Last Update 1:04 PM September 03, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Tuesday, 03. September 2024

KuppingerCole

Passwordless Authentication for Enterprises

by Alejandro Leal

Explore the rise of passwordless authentication, its security benefits, and how it mitigates common password-based attacks like phishing, brute-force, and ATO fraud. This Buyer's Compass can help you find the solution that best fits your business needs.

Tokeny Solutions

ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization

The post ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization appeared first on Tokeny.

Luxembourg, 3rd September 2024 – ShipFinex, a leading innovator in maritime finance, and Tokeny, the pioneering onchain finance operating system for tokenized securities, are proud to announce a strategic partnership aimed at transforming the way maritime assets are tokenized and managed.

This collaboration brings together two industry pioneers with a shared vision of enhancing transparency, security, and compliance in the tokenization of maritime assets. By joining forces, ShipFinex and Tokeny are poised to set a new standard in the digital finance landscape, particularly within the multi-billion-dollar maritime sector.

Elevating Maritime Finance

Shipping and maritime finance has long been an exclusive asset class: limited access through public equity markets and significant initial capital requirements have made it challenging for many investors to participate, even though the shipping market has consistently outperformed many other asset classes.

ShipFinex and Tokeny are committed to democratizing access to maritime investments. Through this partnership, ShipFinex will leverage Tokeny’s cutting-edge technology to ensure that all tokenized maritime assets on its platform meet the highest standards of regulatory compliance and security, using the ERC-3643 standard. This integration not only enhances investor confidence but also positions both companies as leaders in the digital transformation of maritime finance.

Strategic Alignment

The partnership between ShipFinex and Tokeny is a strategic alignment that amplifies the strengths of both companies. ShipFinex’s expertise in maritime finance, combined with Tokeny’s proven track record in tokenized securities infrastructure, creates a powerful synergy that is expected to accelerate the growth and adoption of tokenized maritime assets globally.

Following ShipFinex’s recent announcement of receiving initial approval from VARA in the UAE, this partnership underscores the company’s commitment to adopting world-class solutions to enhance its platform’s security and compliance. This collaboration highlights the robust infrastructure and innovative, regulated approach underpinning ShipFinex’s operations.

Looking Ahead

This strategic partnership sets the stage for future growth and expansion, as both ShipFinex and Tokeny continue to innovate and lead in their respective fields. The integration of their capabilities will facilitate the broader adoption of tokenized maritime assets, offering investors a secure and efficient marketplace.

About ShipFinex

ShipFinex is revolutionizing maritime finance by providing a secure, transparent, regulated, and efficient marketplace for tokenized maritime assets, enabling global investors to access and trade these assets like never before.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter


Monday, 02. September 2024

Dock

Dock and cheqd Form Alliance to Accelerate Global Adoption of Decentralized ID

We are excited to announce that the Dock and cheqd tokens and blockchains are merging to form a Decentralized ID alliance.

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Existing $DOCK tokens will be converted into $CHEQ tokens (pending governance approval from token holders in both communities). This will mark a new chapter of opportunity for our token holders who will benefit from all the Web3 resources cheqd has at their disposal. 

Full article: https://dock.io/post/dock-and-cheqd-form-alliance-to-accelerate-global-adoption-of-decentralized-id


KuppingerCole

SOAR Platforms and Generative AI: Building an AI-Skilled Workforce

by Alejandro Leal

From Luddites to AI

Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution.

Throughout history, workers have adapted to new technologies, from the complex machinery of the Industrial Revolution to today's sophisticated AI systems. Initially, industrial workers had to master mechanical operations to support mass production. Later, the digital revolution demanded proficiency with computers for a variety of tasks.

Now, the integration of AI in workplaces emphasizes skills in managing and leveraging intelligent systems to boost productivity and decision-making processes. This ongoing evolution demonstrates the need for continuous learning and adaptability, underscoring the increasing complexity of skills involved in today’s jobs.

The Evolving Role of Cybersecurity Analysts

Building an AI-skilled workforce requires not only equipping professionals with the tools and knowledge necessary to leverage AI technologies, but also addressing the persistent challenges of the human factor in cybersecurity by implementing the right tools, cultivating a cybersecurity culture, and fostering new skills.

For example, the art of prompt engineering is a relatively new and useful skill. This discipline allows analysts to develop and optimize prompts to use Large Language Models (LLMs) efficiently. These prompts are designed to optimize the language model's performance, ensuring that it produces the desired output with minimal computational resources. For security analysts, generative AI offers a remarkable leap forward in the effectiveness of their work.
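As a sketch of what prompt engineering looks like in practice, the snippet below builds a reusable triage prompt for a hypothetical alert. The template wording and function names are illustrative, not taken from any vendor's tooling.

```python
# Hypothetical sketch: a reusable prompt template for SOC alert triage.
# The template text and function names are invented for illustration.

TRIAGE_TEMPLATE = """You are a SOC analyst assistant.
Summarize the alert below in two sentences, classify its severity
(low/medium/high), and suggest one next investigative step.

Alert:
{alert}
"""

def build_triage_prompt(alert: str) -> str:
    """Fill the template with a raw alert payload."""
    return TRIAGE_TEMPLATE.format(alert=alert.strip())

if __name__ == "__main__":
    alert = "Multiple failed SSH logins for root from 203.0.113.7 within 60s"
    print(build_triage_prompt(alert))
```

Keeping the instructions fixed in a template and varying only the alert payload is one simple way to make an LLM's output more consistent and cheaper to evaluate.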

The integration of generative AI into Security Orchestration, Automation, and Response (SOAR) platforms has the potential to change the role of Security Operations Centre (SOC) analysts. This technology automates routine tasks, allowing analysts to spend more time on strategic aspects of their roles, such as planning new defensive strategies, identifying emerging threats, and formulating proactive mitigation plans.

Balancing Innovation and Responsibility

However, the potential use of generative AI goes beyond simply automating tasks or interacting with a chatbot. For instance, SOC analysts can now use generative AI to craft detailed playbooks that document the steps taken during an incident response. This documentation process not only automates responses but also builds a knowledge base that can inform future responses.

SOC analysts can also use generative AI to create alerts and perform tasks such as threat detection, incident analysis, event summarization, report generation, decision support, and playbook template suggestion. While the integration of generative AI into SOAR platforms offers substantial benefits, several challenges need to be addressed.

Generative AI requires access to vast amounts of data to learn and make decisions. Ensuring that this data is handled securely and in compliance with privacy regulations is a significant challenge. In addition, there is a risk that AI models may develop biases based on the data they are trained on, which can lead to inaccurate or unfair outcomes.

Therefore, the use of generative AI must be accompanied by thorough quality control on the part of the vendor, to ensure that the information provided is indeed useful and accurate. This balanced approach reflects a careful consideration of both the opportunities and the complexities involved with integrating new technologies into security operations.

While some vendors are highly optimistic about the transformative potential of generative AI in SOAR solutions, others remain cautious, choosing to monitor the industry's development closely. These cautious vendors prioritize understanding how to align with customer expectations and carefully evaluate the practical advantages and potential challenges of implementing generative AI.

Great Expectations

By harnessing the potential of generative AI, however, SOC analysts can broaden their scope within cybersecurity practices, cultivating new knowledge and developing new skills.  While Ludd's reaction was to destroy the machines he feared would replace human craftsmanship, the challenge now is not to resist technological advancement, but to integrate it. This approach reflects a broader trend in AI development, where the goal is not to replace human endeavor, but to augment it.

As a result, vendors should prioritize transparency in their marketing to demonstrate the practical value of generative AI, rather than relying on hype or jargon. This approach not only educates customers about the capabilities and limitations of generative AI but also helps in setting realistic expectations. For more on this, see my colleague John Tolbert's blog post on Some Direction for AI/ML-ess Marketing.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

Cybersecurity Resilience with Generative AI

Generative AI in Cybersecurity – It's a Matter of Trust

ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?

Asking Good Questions About AI Integration in Your Organization

Reflections & Predictions on the Future Use (and Mis-Use) of Generative AI in the Enterprise and Beyond


Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3)

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3)

This is the third and final post to release the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI” and was originally published by Chris Were, CEO and co-founder at Verida. You can catch up with Part 1 and Part 2.

Confidential Compute Nodes

Confidential Compute Nodes running on the Verida Self-Sovereign Compute Network operate a web server within a secure enclave environment to handle compute requests and responses.

There will be different types of nodes (e.g., LLM, User API) running different code depending on the service(s) they provide.

For maximum flexibility, advanced users and developers will be able to run compute nodes locally, on any type of hardware.

Nodes have key requirements they must adhere to:

GPU access is required for some compute nodes (i.e., LLM nodes), but not others. As such, the hardware requirements for each node will depend on the type of compute services running on the node.

Code Verifiability is critical to ensure trust in the compute and security of user data. Nodes must be able to attest the code they are running has not been tampered with.

Upgradability is essential to keep nodes current with the latest software versions, security fixes, and other patches. Coordination is required so that applications can verify their code is running on the latest node versions.

API endpoints are the entry point for communicating with nodes. It is essential that the web server communicating with the outside world operates within the secure enclave.

SSL termination must occur within the secure enclave to ensure the host machine can’t access API requests and responses.

Resource constraints (e.g., CPU, memory) will exist on each node, limiting the number of active requests it can handle. The network and nodes will need to coordinate so that nodes with sufficient available resources are selected for any given request.
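The resource coordination requirement above could look something like the following node-selection sketch. The Node fields and selection rule are assumptions for illustration, not Verida's actual protocol.

```python
# Illustrative sketch (not Verida's actual protocol): pick a compute node
# with enough free CPU and memory to serve a request, preferring the
# least-loaded candidate. All field names are invented for this example.

from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    free_cpu_cores: float
    free_memory_gb: float
    active_requests: int
    max_requests: int

def select_node(nodes, cpu_needed, mem_needed):
    """Return the least-loaded node satisfying the resource request, or None."""
    candidates = [
        n for n in nodes
        if n.free_cpu_cores >= cpu_needed
        and n.free_memory_gb >= mem_needed
        and n.active_requests < n.max_requests
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n.active_requests / n.max_requests)
```

A real scheduler would also weigh node type (GPU vs. CPU-only) and attestation status, but the core idea of filtering by available resources before selection is the same.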

Interoperability and Extensibility

To create an efficient and highly interoperable ecosystem of self-sovereign APIs, a set of common data standards is necessary. Verida’s self-sovereign database storage network provides this infrastructure via guaranteed data schemas within encrypted datasets, a solid foundation for data interoperability.

Developers can build new self-sovereign compute services that can be deployed on the network and then used by other services. This creates an extensible ecosystem of APIs that can all communicate with each other to deliver highly complex solutions for end users.
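As an illustration of schema-guaranteed interoperability, a service might validate records against a shared envelope before exchanging them with another service. The field names below are invented for the sketch, not actual Verida schemas.

```python
# Minimal sketch of schema-guaranteed interoperability: every record
# exchanged between services must carry a common envelope. The envelope
# fields here are made up for illustration.

REQUIRED_FIELDS = {"schema": str, "timestamp": str, "payload": dict}

def validate_record(record: dict) -> bool:
    """Check that a record carries the common envelope all services expect."""
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record or not isinstance(record[field], ftype):
            return False
    return True
```

Rejecting malformed records at the boundary is what lets independently built services compose safely: each side can trust the shape of the data it receives.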

Figure 4: Interoperable data between self-sovereign AI services

Over time, we expect a marketplace of private AI products, services and APIs to evolve.

Service Discovery

Verida’s self-sovereign compute network will enable infrastructure operators to deploy and register a node of a particular service type. When an API needs to send a request to one of those service types, it can perform a “service lookup” on the Verida network to identify a suitable trusted, verifiable node it can use to send requests of the required service type.
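A service lookup of this kind might be sketched as a simple registry keyed by service type. The class and method names are hypothetical, not the real Verida API.

```python
# Toy sketch of service discovery: operators register node endpoints under
# a service type; callers look up any node offering that type. In the real
# network, lookup would also verify node trust and attestation.

import random

class ServiceRegistry:
    """Maps service types to the node endpoints that provide them."""

    def __init__(self):
        self._nodes = {}  # service_type -> list of endpoints

    def register(self, service_type, endpoint):
        """An operator deploys a node and registers it under a service type."""
        self._nodes.setdefault(service_type, []).append(endpoint)

    def lookup(self, service_type):
        """Return one endpoint offering the service type, or None."""
        candidates = self._nodes.get(service_type, [])
        return random.choice(candidates) if candidates else None
```

Random selection among candidates is just one policy; load- or latency-aware selection would slot into `lookup` without changing the registry interface.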

User Data Security Guarantees

It is essential to protect user privacy within the ecosystem and prevent user data from leaking to non-confidential compute services outside the network. Each service deployed to the network will run verifiable code on verifiable confidential compute infrastructure.

In addition, each service will only communicate with other self-sovereign compute services. Each API request to another self-sovereign compute service will be signed and verified to have been transmitted by another node within the self-sovereign network.
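The signing and verification of inter-node requests can be illustrated with HMAC, though the actual network may well use asymmetric signatures tied to node identities. This is a toy sketch with a shared key established out of band.

```python
# Toy sketch of signed inter-node requests using HMAC-SHA256. This is an
# assumption for illustration; a production network would more likely use
# per-node asymmetric keys so any node can verify any other's signature.

import hmac
import hashlib

def sign_request(shared_key: bytes, body: bytes) -> str:
    """Sender side: compute a signature over the request body."""
    return hmac.new(shared_key, body, hashlib.sha256).hexdigest()

def verify_request(shared_key: bytes, body: bytes, signature: str) -> bool:
    """Receiver side: accept only bodies whose signature checks out."""
    expected = sign_request(shared_key, body)
    return hmac.compare_digest(expected, signature)
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` comparison can leak signature bytes through timing differences.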

Tokenized Payment

The VDA token will be used to pay for access to self-sovereign compute services. A more detailed economic model will be provided; however, the following key principles are expected to apply.

End users will pay on a “per-request” basis to send confidential queries to compute nodes and the services they operate. The cost per request will be calculated in a standardized fashion that balances the computation power of a node, memory usage and request time. Applications can sponsor the request fees on behalf of the user and then charge a subscription fee to cover the cost, plus profit, much like a traditional SaaS model.
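A standardized per-request fee of the kind described might be computed as a weighted sum of the resources consumed. The weights, units, and base rate below are invented for the sketch, since the detailed economic model has not yet been published.

```python
# Illustrative per-request fee: a weighted sum of CPU time and memory-time
# consumed by a compute request. All constants are assumptions for the
# sketch, not published Verida pricing.

def request_fee(cpu_seconds, memory_gb_seconds, base_rate=0.001,
                cpu_weight=1.0, mem_weight=0.25):
    """Fee (in VDA) balancing compute power, memory usage, and request time."""
    return base_rate * (cpu_weight * cpu_seconds + mem_weight * memory_gb_seconds)
```

An application sponsoring fees on behalf of its users, as the text describes, would simply sum `request_fee` over its users' requests and recover the cost through a subscription.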

Node operators will be compensated for providing the confidential compute infrastructure to Verida’s Self-Sovereign Compute Network.

Builders of services (e.g., AI prompts and agents) will be able to set an additional fee for using their compute services, above and beyond the underlying “per-request” compute cost. This open marketplace for AI agents and other tools drives innovation and provides a seamless way for developers to generate revenue from the use of their intellectual property.

Verida Network will charge a small protocol fee (similar to a blockchain gas fee) on compute fees.

Other Use Cases

Data Training Marketplaces

Verida’s Private Data Bridge allows users to reclaim their private data from platforms such as Meta, Google, X, email, LinkedIn, Strava, and much more.

Users on the Verida network could push their personal data into a confidential compute service that anonymizes it (or generates synthetic data), which is then made available to various AI data marketplaces. This gives users an option to monetize their data without risk of data leakage, while unlocking highly valuable and unique datasets, such as private messages, financial records, emails, and healthcare data, for training purposes.

Managed Crypto Wallets

There is a vast array of managed wallet services available today that offer different trade-offs between user experience and security.

Having an always-available cloud service that can protect users’ private keys while still providing multiple authorization methods is extremely useful for onboarding new users and offering additional backup protection for existing users.

Such a managed wallet service becomes rather trivial to build and deploy on the Verida self-sovereign compute network.

Verifiable Credentials

Verida has extensive experience working with decentralized identity and verifiable credential technology, in combination with many ecosystem partners.

There is a significant pain point in the industry: developers within credential ecosystems must integrate many disparate SDKs to offer an end-to-end solution. This is due to the self-sovereign nature of credential and identity solutions, where a private key must be retained on end-user devices to provide end-to-end security.

Verida’s self-sovereign compute network can provide a viable alternative, whereby application developers can replace complex SDK integrations with simple self-sovereign APIs. This makes integration into mobile applications (such as identity wallets) and traditional web applications much easier and more viable.

This could be used to provide simple API integrations to enable:

Identity wallets to obtain access to a user’s verifiable credentials

End users to pre-commit selective disclosure rules for third-party applications or identity wallets, without disclosing their actual credentials

Trusted, verifiable universal resolvers

Trust registry APIs

Any complex SDK that requires a user’s private key to operate, could be deployed as a micro service on Verida’s self-sovereign compute network to provide a simpler integration and better user experience.

Conclusion

Verida’s mission to empower individuals with control over their data continues to drive our innovations as we advance our infrastructure. This Litepaper outlines how the Verida Network is evolving from decentralized, privacy-preserving databases to include decentralized, privacy-preserving compute capabilities, addressing critical issues in AI data management and introducing valuable new use cases for user-controlled data.

As AI faces mounting challenges with data quality, privacy, and transparency, Verida is at the forefront of addressing these issues. By expanding our network to support privacy-preserving compute, we enable more effective safeguarding of private data while allowing it to be securely shared with leading AI models. This approach ensures end-to-end privacy and opens the door to hyper-personalized and secure AI experiences.

Our solution addresses three fundamental problems: enabling user access to their private data, providing secure storage and sharing, and ensuring confidential computation. Verida’s “Private Data Bridge” allows users to securely reclaim and manage their data from various platforms and facilitate its use in personalized AI applications without compromising privacy.

While we are not focusing on decentralized AI model training or distributed inference, Verida is committed to offering high-performance, secure, and trusted infrastructure for managing private data. We are collaborating with partners developing private AI agents, AI data marketplaces, and other privacy-centric AI solutions, paving the way for a more secure and private future in AI. This empowers users to be confident about the ways their data is used, and receive compensation when they do choose to share elements of their personal data.

As we continue to build on these advancements, Verida remains dedicated to transforming how private data is utilized and protected in the evolving landscape of AI.

You can learn more or get involved at https://www.verida.network/

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 3) was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Passwordless Authentication for Enterprises

by Alejandro Leal

This report provides a detailed examination of passwordless authentication technologies designed for enterprise use cases. As organizations increasingly prioritize robust and streamlined security protocols, the demand for sophisticated passwordless solutions has grown significantly. This report explores the current landscape of enterprise-focused passwordless authentication technologies and guides businesses in selecting the most effective solution to meet their security needs. By analyzing the market segment, vendor product and service functionality, relative market share, and innovative approaches, organizations can make informed decisions about their authentication strategies for their employees and systems.

Finema

This Month in Digital Identity — September Edition

This Month in Digital Identity — September Edition

Welcome to the September edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. Here’s a closer look at the essential topics we’ll be covering:

AI Enhancing Healthcare Fraud Prevention

Artificial Intelligence (AI) is becoming a crucial tool in combating healthcare fraud by analyzing vast datasets in real-time to detect fraudulent activities, particularly through voice biometrics that verify patient identities and prevent unauthorized access to healthcare services. Additionally, there is a growing focus on enhancing patient experiences through digital trust technologies, such as secure digital signatures and messaging platforms, which protect patient data and streamline healthcare processes. Innovations like chip-based ID cards are also being adopted, as seen in Vietnam, to secure patient information and simplify access to healthcare services, reducing the risk of identity theft and fraud. These technological advancements collectively aim to strengthen the integrity of healthcare systems, safeguard patient data, and improve operational efficiency, ultimately enhancing the overall patient experience.

Somalia’s Financial Inclusion Drive

Somalia is advancing its digital transformation with a new Memorandum of Understanding (MoU) between the National Identification and Registration Authority (NIRA) and the Somali Banks Association (SBA) to drive financial inclusion through the national ID program. Launched a year ago, this program aims to provide the 18 million residents with a unified identity, facilitating access to banking services and aligning with global standards. The partnership seeks to enhance financial security, reduce fraud, and streamline banking processes by using the National Identification Number (NIN) for customer verification. This initiative is part of a broader effort to bolster the country’s economy, ensure compliance with international regulations, and increase public trust in financial institutions. The collaboration has been praised by key government figures and international partners, who see it as crucial for Somalia’s development. Ongoing consultations with stakeholders aim to further strengthen the national ID system, making it more impactful in supporting economic growth and modernizing financial services.

Spain’s New Age Verification System

Spain has introduced technical specifications for a new online age verification system aimed at controlling minors’ access to adult content, using W3C Verifiable Credentials (VCs) as the core technology. This approach addresses growing concerns over the negative impact of unrestricted access to adult content on the mental health and social skills of children and teenagers. By implementing W3C VCs, Spain ensures that age verification is conducted securely and privately, without disclosing personal information, thus aligning with GDPR principles. W3C VCs offer unmatched security through advanced cryptographic methods, enhanced privacy by allowing users to share only necessary information, and portability by integrating seamlessly with digital wallets. The system also follows the OpenID For Verifiable Presentations (OpenID4VP) specification, ensuring secure and private verification, and includes a trust management framework to ensure only authorized entities can issue or verify credentials, making it an ideal solution for protecting minors online.
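Conceptually, the selective-disclosure pattern behind such a system reduces to the wallet deriving a minimal boolean claim from the full credential, so the verifier never sees the birth date. The sketch below is a drastic simplification of W3C VC and OpenID4VP flows, not a conformant implementation.

```python
# Conceptual sketch of selective disclosure for age verification: the
# verifier learns only an "over 18" boolean, never the birth date. Real
# W3C VC presentations add issuer signatures and cryptographic proofs
# omitted here.

from datetime import date

def make_age_claim(birth_date: date, today: date) -> dict:
    """Wallet side: derive a minimal claim from the full credential."""
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return {"claim": "over_18", "value": years >= 18}

def verifier_accepts(presentation: dict) -> bool:
    """Verifier side: check only the boolean; no personal data is disclosed."""
    return presentation.get("claim") == "over_18" and presentation.get("value") is True
```

The GDPR alignment the article mentions comes precisely from this minimization: the site operator only ever stores a yes/no answer it could not reverse into a date of birth.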

The Digital Travel Credential (DTC)

In the realm of digital identity, numerous digital credentials are vying to replace physical documents, with the European Union’s eIDAS 2.0 and digital driver’s licenses being notable examples. However, none match the Digital Travel Credential (DTC) standard for digital trust, developed by the International Civil Aviation Organization (ICAO), which sets the universal standards for passports. The DTC, designed as the digital equivalent of a passport, offers two types: one created by a user from their physical passport and another issued directly by passport authorities. Indicio and SITA pioneered the implementation of the Type 1 DTC, which is now being adopted by countries and airlines for seamless travel. The DTC’s strength lies in its use of cryptographic verification, ensuring that passport data is securely held on a user’s device without needing to be stored in centralized databases, mitigating risks of data breaches. By scanning their passport, users can verify the authenticity of their data, bind it to their device through biometric checks, and ensure that their digital credentials are trustworthy and tamper-proof. This system provides airlines, airports, and border control with the confidence to streamline travel processes, knowing that the data in the DTC is authenticated, portable, and instantly verifiable.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — September Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID

POSTECH Adopts Metadium Mainnet-Based Smart Student ID

Dear Community,

We have some exciting news to share. Pohang University of Science and Technology (POSTECH) has adopted a blockchain-based smart student ID using Metadium’s mainnet. This significant achievement demonstrates the excellence and reliability of Metadium’s technology.

Here are the unique features that make POSTECH’s smart student ID stand out:

Security and Privacy: Students’ personal information is securely protected through the Metadium mainnet, making it impossible to falsify or tamper with user information.

Convenient Use: Using blockchain-based DID authentication, users can manage their personal information and selectively submit information. Additionally, students can easily issue and use mobile student IDs remotely through their smartphones.

Efficient Management: The university can now issue mobile smart student IDs through an online automated process, in addition to plastic student IDs, enabling more efficient workflow improvements.

This case at POSTECH is an excellent example of how blockchain technology can be applied to make our lives more convenient. Our Metadium team will continue to strive for more universities and institutions to use Metadium’s technology.

We are truly grateful for the unwavering interest and support from the Metadium community. We eagerly look forward to your continued support.

Thank you.

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 01. September 2024

KuppingerCole

Generative AI in SOAR: Balancing Innovation and Responsibility


Generative AI is ubiquitous - anyone can use ChatGPT and other tools for free to create text, images, and more. But generative AI also has potential in the professional environment. Businesses should consider how they can leverage the use of AI with prompt engineering etc.

In this episode, Alejandro and Matthias discuss the integration of machine learning and AI into cybersecurity infrastructures, particularly SOARs. The conversation covers the role of generative AI in changing the daily tasks of cybersecurity professionals, the challenges of integrating generative AI into SOAR platforms, the importance of prompt engineering, and the need for a balanced approach to innovation and accountability. It also addresses the security and ethical considerations of using AI in cybersecurity and the general impact of generative AI on different industries.



Saturday, 31. August 2024

FindBiometrics

Academics Weigh In on Ethics and Risks of Biometric Tech

With biometric technologies becoming increasingly commonplace, more academic experts are weighing in on their ethical and regulatory implications. In a lecture at The New York Academy of Sciences, Harvard’s Becerra […]

SC Media - Identity and Access

Why GenAI requires a heightened focus on security

Legacy tools were not built for the GenAI world, so there’s much work ahead in developing tools and processes that can secure these new GenAI tools.


Friday, 30. August 2024

FindBiometrics

Rezonate’s Mid-Market Identity Security Solution Aims to Reduce Cloud Attack Surface

Identity solutions provider Rezonate has introduced a new identity security solution aimed at mid-market companies, designed to address challenges in managing identities across multi-cloud and SaaS environments. Mid-market organizations often […]

Tampa Bay Rays Embrace Biometric ‘Go-Ahead Entry’ for Fans

The Tampa Bay Rays have become the latest MLB team to implement the Go-Ahead Entry facial authentication system, which has been deployed at the team’s Tropicana Field. The Rays are […]

Illinoisan Bumble, Badoo Users May Get Payout from $40 Million Biometric Privacy Settlement

Bumble and Badoo users in Illinois may be entitled to a portion of a $40 million settlement following a class action lawsuit alleging violations of the state’s Biometric Information Privacy […]

Fingerprint Cards ‘AllKey’ Enables Both Logical and Physical Access Control

Fingerprint Cards AB (“Fingerprints”) has introduced FPC AllKey, a new biometric system designed to enhance both logical and physical access control. The system was developed to integrate seamlessly into various […]

AI Update: The Least Funny Part of the Hype Cycle

Welcome to the newest edition of FindBiometrics’ AI update. Here’s the latest big news on the shifting landscape of AI and identity technology: A new funding round could bring OpenAI’s […]

New Paper Details Synthetic Image Generation for MAD System Training

A new paper, titled “Generating Automatically Print/Scan Textures for Morphing Attack Detection Applications,” explores innovative methods to enhance Morphing Attack Detection (MAD) systems, offering important applications in biometrics and identity […]

Romania Now Requires Biometric Data from Citizenship Applicants

The Romanian government has implemented new measures regarding the use of biometrics in citizenship. Starting from September 1, 2024, applicants seeking to obtain or reacquire Romanian citizenship will be required […]

auth0

Deploy Secure Spring Boot Microservices on Azure AKS Using Terraform and Kubernetes

Deploy a cloud-native Java Spring Boot microservice stack secured with Auth0 on Azure AKS using Terraform and Kubernetes.

Okta Fine Grained Authorization is now Available in Private Cloud on AWS

Now, you can deploy Okta FGA in several AWS regions with high availability and requests per second.

SC Media - Identity and Access

Interview with ThreatLocker: Is Application Allowlisting Making a Comeback? - Danny Jenkins - ESW #374


Funding round secures almost $5.9M for Uniqkey

Such funds would be allocated by the Denmark-based startup toward further scaling its password and access management technology to small and medium-sized businesses.



Legislation easing info sharing opt-outs approved in California

Under the bill, all web browsers would be required to integrate an "opt-out preference signal" tool that would allow opt-out requests for all visited websites.

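An opt-out preference signal of this kind already exists in the wild as Global Privacy Control, transmitted by the browser as a Sec-GPC: 1 request header. A minimal, hypothetical server-side check might look like the following sketch (the function name and the dict standing in for real request headers are illustrative):

```python
def wants_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    GPC is expressed as the literal header value "1"; any other value,
    or a missing header, means no opt-out signal was sent.
    """
    return headers.get("Sec-GPC") == "1"


# A plain dict stands in for a real framework's request headers here.
assert wants_opt_out({"Sec-GPC": "1"})
assert not wants_opt_out({})
```

In a real deployment this check would gate any downstream sale or sharing of the visitor's data, which is the behavior the bill would mandate.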

Thursday, 29. August 2024

FindBiometrics

IDEX Brings Biometric Payment Cards to Indian Market

IDEX Biometrics has announced the introduction of biometric payment cards to the Indian market, in collaboration with a global leader in payment services. India, with over 1 billion payment cards […]

Spruce Systems

Why the U.S. Post Office is Key to Fighting AI Fraud

Pending legislation could transform the venerable USPS into a key player in the fight against fraud.

For years now, the United States Postal Service has been struggling to adjust to the digital world, as the decline of letter mail has left the agency’s budget in shambles. That’s a threat to the Postal Service’s role in connecting all Americans.

Fortunately, a bill under consideration in the U.S. Senate, the POST ID Act, would reinvigorate the venerable service for a new era, help improve USPS’s budget woes – and make it a powerful asset for digital security. The bill proposes using physical Post Office locations to offer real-world identity verification – verification that would, in turn, help fight fraud and disinformation online.

That’s similar to the way DMV locations in states like California issue both traditional and digital driver’s licenses. But the Post Office could play a much broader role: the bill’s bipartisan sponsors, Bill Cassidy (R-LA) and Ron Wyden (D-OR), want to allow the Post Office to perform identity verifications for an array of private clients, in addition to public sector agencies it already serves. Combined with some product strategy, this new paid service could help to balance the agency’s budget as well.

This new USPS service would be an extension of the agency’s longtime work connecting people against all obstacles. Instead of refusing to stop for “snow nor rain nor heat nor gloom of night,” this new Postal Service would also be tasked with helping overcome hackers.

A Physical Network for the Digital Age

Senator Wyden was absolutely spot-on when he said that “AI deepfakes have added a whole new challenge for the most common [online identity] verification methods. The best way to confirm who someone is, is in-person verification.”

Wyden’s warning came in October of last year, and the threat of AI has only become more obvious since then. That includes a recent report that artificial intelligence was being used to create convincing fake ID cards at an unprecedented scale, and the equally concerning evolution of deepfake tools into the realm of video, allowing convincing live impersonation online.

But those tricks don’t work in the physical world. Only a real, natural human can walk up to the counter at a Post Office and seek identity verification by a fellow human. Not just physical appearance, but also biometrics like fingerprints are much harder to fake in person than online.

There are very few entities of any sort better positioned to conduct that affirmation than the U.S. Post Office. The USPS has a staggering 31,123 locations across practically every corner of America - even without including locations operated under contract. Post Offices can be found in far-flung U.S. territories like Guam, or at the far northern edge of Alaska, guaranteeing new verification services can be accessed by very nearly every American.

Once an identity is verified in person, it can be digitally recorded using new digital identity credential technology that is extremely trustworthy and secure—and even lets users verify their humanness without revealing their identity.

The Power of Cryptography

The Cassidy-Wyden bill would give the USPS new responsibilities for verifying natural humans, and the ability to serve an array of clients would create a new stream of revenue for the agency. Those verifications would then need to be represented as a trustworthy “digital credential” for users to present online. Luckily, such systems already exist, for instance, in the form of the digital driver’s license offered in California and a growing list of other states.

Trustworthy digital credentials rely on a mix of innovative encryption and widely available hardware – specifically, your mobile phone. In broad outline, a credential issuer like the DMV or Post Office would have a unique digital ‘signature’ tied to a secure computer on-site. After conducting identity verification, the USPS office would digitally sign a credential using the “secure element” chip in the recipient’s mobile phone. This credential could then be presented in a variety of contexts to help a user prove their identity.
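As a rough illustration of that broad outline (issue a signed credential once, verify it later), here is a deliberately simplified sketch. It is not the actual USPS or mDL protocol: an HMAC stands in for the public-key signature a secure element would produce, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a signing key held in secure hardware.
ISSUER_KEY = b"issuer-private-key"

def issue_credential(subject: dict) -> dict:
    """Issuer side: bind a payload to the issuer with a keyed digest.

    A real issuer would use a public-key signature so anyone can verify;
    HMAC is used here only to keep the sketch stdlib-only.
    """
    payload = json.dumps(subject, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier side: recompute the digest and compare in constant time."""
    expected = hmac.new(ISSUER_KEY, cred["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

cred = issue_credential({"verified_at": "post-office", "human": True})
assert verify_credential(cred)
```

The point of the sketch is the flow, not the cryptography: verification happens once in person, and the resulting signed artifact can then be presented many times online.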

The details of the “identity” that a user wants to prove can vary widely, and digital credentials of this sort are very flexible. A common feature of digital credentials is what’s known as “selective disclosure,” which lets a credential holder share only the minimum required information in a particular interaction. 

At its most minimal, a digital credential issued by the USPS could prove only that the holder is a real human being without disclosing any other identifying data. As laid out in a recent research paper by a coalition including researchers from SpruceID, this simple “personhood credential” could be a key element in the fight against costly identity fraud and toxic disinformation online.
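One common way to implement selective disclosure, used for example by the SD-JWT family of credential formats, is salted hash commitments: the issuer signs per-claim salted digests, and the holder later reveals only the salt-and-value pairs they choose. A simplified, stdlib-only sketch follows; the issuer's signature over the digests is omitted, and all names are illustrative.

```python
import hashlib
import json
import secrets

def commit(claims: dict):
    """Issuer side: produce per-claim salted digests (which would be
    signed) plus the salts the holder keeps for later disclosure."""
    salts = {k: secrets.token_hex(16) for k in claims}
    digests = {
        k: hashlib.sha256((salts[k] + json.dumps(v)).encode()).hexdigest()
        for k, v in claims.items()
    }
    return digests, salts

def disclose(claims: dict, salts: dict, reveal: list) -> dict:
    """Holder side: reveal only the chosen claims, with their salts."""
    return {k: (salts[k], claims[k]) for k in reveal}

def verify_disclosure(digests: dict, disclosed: dict) -> bool:
    """Verifier side: recompute digests for the revealed claims only."""
    return all(
        hashlib.sha256((salt + json.dumps(value)).encode()).hexdigest()
        == digests[k]
        for k, (salt, value) in disclosed.items()
    )

claims = {"name": "Alice", "over_18": True, "address": "123 Main St"}
digests, salts = commit(claims)
shown = disclose(claims, salts, ["over_18"])  # share only "over_18"
assert verify_disclosure(digests, shown)
```

The verifier learns that "over_18" is true and was vouched for by the issuer, while the name and address digests remain opaque, which is exactly the minimal-disclosure property a personhood credential needs.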

Expanding the Network of Trust

The incredible omnipresence of USPS locations makes it an ideal candidate, alongside DMVs, to lead the charge for in-person identity verification and issuance. We can still think bigger, though.

Other trusted entities might be brought into the in-person verification network, expanding access and convenience even further. Candidates might include other shippers, such as UPS and FedEx, which have extensive physical networks as well as address records and other data that can help confirm identities. In the most rural or remote parts of America, retailers might be recruited to the network, though they would require significant additional equipment and training. One benefit of allowing certified private-sector participants to also provide in-person identity verification is keeping costs low for users and businesses while incentivizing competition and innovation.

Over time, the identity verification process would also be streamlined for efficiency and convenience. One major potential efficiency would be collecting an applicant’s data online before an in-person verification session, reducing wait times and workloads. Streamlining of this sort would be particularly important since some digitally signed credentials need to be refreshed more often than conventional physical identity documents.

Offering identity verification via Post Office locations would be part of a yet more expansive system of verifications built on a shared standard for data formats, security practices, and privacy measures. The larger system that SpruceID is helping drive forward is flexible, offering various options for credential holders to choose what data they share.

But perhaps the most important yet challenging feature of this emerging system is creating broad access to in-person verification. For that, the good old Post Office will be hard to beat.

To learn more about SpruceID and our approach to fighting AI fraud, visit our website.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


FindBiometrics

Germany Joins Police FRT Trend After Violent Attacks

Germany’s coalition government has introduced a package of tighter security and asylum policies, prominently featuring measures to combat “violent Islamism.” Among these, law enforcement authorities will be granted permission to […]

SC Media - Identity and Access

2024 SC Awards Finalists: Best Identity Management Solution

This category recognizes the increased importance of identity access and management in today's security strategies.



Applause credentials inadvertently exposed

Included in the exposed .env file were Applause's credentials for Marketo, SalesForce, and Gotowebinar systems, which could result in the compromise of sensitive customer information, marketing details, and operational and financial data from its clients.



SC Awards 2024: Celebrating the Finalists 

Congratulations to the 2024 SC Awards finalists and a big 'Thank You' to the 50-plus judges and their careful considerations.



liminal (was OWI)

Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™

With information overload becoming a constant challenge, quickly accessing relevant and actionable insights is essential to making informed decisions and staying competitive. The Link Market Monitor, powered by expert-in-the-loop AI technology, combines real-time data with expert analysis to cut through the noise and surface what’s important to you—and what you should do about it. By delivering only the most pertinent market signals, it allows you to efficiently spot trends and seize new opportunities. This guide will show you how to use the Market Monitor to tailor insights to your needs, ensuring you’re always a step ahead.

Step 1: Accessing the Market Monitor™

From the Dashboard: Navigate to your Link’s dashboard. Look for the Market Monitor widget, which displays recent headlines from your top monitors. Click on the widget to be taken directly to the Monitors Page.

Using the Left Navigation Menu: In the platform’s main interface, locate the “Market Monitor” link in the left-hand navigation menu. Click on it to access the Monitors Page.

Step 2: Setting Up Your Tailored Monitors

On the Monitors Page, you’ll find a list of pre-configured monitors that align with your industry interests, such as “Emerging Technologies,” “Competitive Landscape,” or “Market Trends.” Click the “create new monitor” button to create a new monitor that meets your specific needs. Here, you can specify companies, sectors, themes, keywords, and more to tailor your monitor’s focus.

Step 3: Exploring and Curating Insights

Opening a Monitor: Click “Open Monitor” on any monitor card you’ve created. You’ll be directed to the Monitor Detail Page, where a curated newsfeed offers real-time insights filtered by your set criteria.

Interacting with Curated Content: Scroll through the newsfeed to browse relevant articles and updates. Click on any article to open it in the reading pane, where you can explore the details. Use the filter bar at the top of the page to further refine the content within your monitor, ensuring you see only the most relevant insights.

Step 4: Leveraging Expert-in-the-Loop AI for Personalized Insights

The Link Market Monitor utilizes expert-in-the-loop AI technology, which combines real-time data with expert analysis to deliver personalized insights. As you interact with the monitors, the AI engine continuously learns from your preferences, fine-tuning the content it delivers to ensure it remains highly relevant to your needs.

Step 5: Receiving Real-Time Alerts and Updates

Set up real-time alerts to stay informed without the noise. The Market Monitor’s AI engine filters out irrelevant information, sending you only the most pertinent updates. Customize your alerts to focus on key trends, opportunities, and competitive threats, ensuring you never miss a critical development in your industry.

Step 6: Sharing Insights with Your Team

Collaborating on Strategies: Use the shared monitors to collaborate effectively, ensuring your team is aligned with the latest market intelligence and ready to make informed decisions.

Best Practices:

Regularly Update Your Monitors: As your business goals evolve, update your monitors to reflect new priorities and market conditions.

Maximize AI Insights: Leverage the expert-in-the-loop AI to refine and improve the relevance of your insights continuously.

Focus on What Matters: Use the real-time signals to stay on top of key developments, allowing you to react swiftly to market changes.

Why the Market Monitor™ is Essential for Business Leaders

Proactive Decision-Making: The Market Monitor™ equips you with the most relevant insights, empowering you to stay ahead of market trends and shifts. By providing timely, actionable information, it allows you to anticipate changes and make decisions that drive your organization forward.

Enhanced Strategic Focus: As a business leader, focusing on what truly matters is crucial. The Market Monitor™ filters out irrelevant data and surfaces only the most pertinent signals, ensuring your strategic decisions are based on insights that directly impact your business objectives.

Continuous Adaptation: The expert-in-the-loop AI technology behind the Market Monitor™ ensures that the insights you receive are always aligned with current market conditions. As your business environment evolves, the Market Monitor™ adapts to provide you with up-to-date, relevant information, helping you stay agile in a competitive landscape.

Collaborative Insight Sharing: Effective leadership involves ensuring your entire team is aligned with the latest intelligence. The Market Monitor™ facilitates seamless collaboration by allowing you to share tailored insights across your organization, enabling informed, unified decision-making.

Strategic Empowerment: In a complex and fast-paced industry, having the right information at the right time is crucial. The Market Monitor™ empowers you with the knowledge and tools needed to navigate market complexities confidently, helping you lead your organization to sustained success.

The post Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™ appeared first on Liminal.co.


SC Media - Identity and Access

2024 SC Awards Finalists: Best Authentication Technology

Particularly in this era of a distributed, often hybrid work environment, authentication of users and devices is critical to ensuring security of systems and data.



Spherical Cow Consulting

Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities

Privacy-Enhancing Technologies (PETs) are essential for safeguarding digital identities amidst increasing data breaches. They encompass tools like zero-knowledge proofs and advanced biometrics to secure both human and non-human identities in the digital space. As digital identity expands to include non-human entities, PETs are vital for ensuring privacy and security.

I want to talk about PETs. No, not about my cats (though they are awesome), but about Privacy-Enhancing Technologies.

Not a day goes by without learning about another data breach that is exposing critical details about people and things online. Enter Privacy-Enhancing Technologies (PETs)—a critical component in digital security. These tools, like zero-knowledge proofs and advanced biometrics, are designed to safeguard digital identities while allowing people and things to get work done.

The rise of privacy-enhancing technologies (PETs) like zero-knowledge proofs and advanced biometrics is reshaping how we think about and manage digital identity. But what’s driving this change, and why should it matter to you, whether you’re managing user access or overseeing countless processes and APIs in the cloud?

All Identities Need PETs

Digital identity isn’t just about people anymore. Sure, your personal online identity—how you log in, interact, and transact—remains essential. But increasingly, digital identity also includes non-human entities like software processes, APIs, and entire cloud workloads. These non-human identities need the same attention to security and privacy as human ones, especially as they become more central to how businesses operate.

When I first started thinking about digital identity, it was all about ensuring the right people had access to the right resources. Today, though, we’re dealing with identities that aren’t people at all—identities that exist in the cloud, managing everything from payroll to AI model training, often without any direct human oversight or even a human-like credential. And these identities need to be just as secure, if not more so, given the scale and complexity they operate within.

Human and Non-Human Considerations

Biometrics like facial recognition and fingerprint scanning have long been used to verify human identities. There’s a lot of work in the field of biometrics, especially with concerns about deepfakes making Ye Olde Fashioned liveness detection hardly a thing. But what about non-human identities? While biometrics might not apply directly, the principles of unique identification and secure access certainly do. For instance, in a cloud environment, processes and APIs need to be uniquely identified and authorized—much like a person—but with a focus on speed, scalability, and automation.

So, two challenges: ensuring that human identities are securely managed while also creating systems that can handle the massive scale of non-human identities. Whether it’s a government-issued digital credential or a cloud-based process, the goal is the same: secure, reliable, and privacy-respecting identity management.

Addressing Privacy Concerns with Digital Credentials

Governments are moving towards digital credentials to improve security and convenience. But this shift brings new privacy challenges. For humans, the way these credentials are issued and managed has significant implications for personal privacy. PETs like zero-knowledge proofs are becoming crucial to ensure that sensitive information remains private, even when it’s used to prove identity.

For non-human identities, the concerns are different but equally important. In cloud environments, digital credentials need to be robust enough to manage the complex interactions between countless processes and APIs, all while maintaining strict access controls and minimizing the risk of breaches.

Of course, if it was easy, I wouldn’t be writing about it. Standards organizations like the IETF are trying to define what a credential should look like in a scenario where it may or may not be for a person (that’s work in SPICE). They’re also trying to define the best way to move those credentials around from one cloud service to the next, given those cloud services don’t exactly speak the same languages (that’s work in WIMSE). And these days we can’t have those conversations without considering the privacy implications of all of it.

Zero-Knowledge Proofs: PETs for All Identities

Which takes us to an area I find fascinating: Zero-Knowledge Proofs (ZKPs). ZKPs are a game-changer for both human and non-human identities. They allow for the verification of information without revealing the underlying data, making them perfect for situations where privacy is paramount. To put it another way, a ZKP will tell you that a statement is true without actually exposing any of the data behind it. “Is this mobile driver’s license valid?” becomes a question that can be answered without exposing any of the data in the mDL. It’s magic, I tell you, pure magic. (And math. Lots and lots of math.)

In the human world, this might mean you will be able to prove your identity without exposing personal details. In the non-human world, ZKPs can help secure interactions between cloud processes, ensuring that only authorized entities can access sensitive data or perform critical operations. This approach not only protects individual privacy but also bolsters the security of complex digital ecosystems.

Why aren’t ZKPs widely deployed? Because the math involved is incredible, and not all devices can actually handle the necessary computations in the time people expect their web pages to load or their APIs to run. But that’s today; tomorrow is going to be an entirely different story as hardware improves.
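To give a flavor of that math, here is a toy non-interactive Schnorr proof of knowledge of a discrete logarithm, one of the simplest ZKPs: the prover convinces a verifier that it knows x with y = g^x mod p, while revealing nothing about x. The parameters below are far too small for real security and the code is purely illustrative.

```python
import hashlib
import secrets

# Toy group parameters: a 127-bit Mersenne prime, fine for illustration,
# far too small for real-world security.
p = 2**127 - 1
g = 3

def challenge(*parts) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(x: int):
    """Prover: show knowledge of x with y = g^x mod p, without sending x."""
    y = pow(g, x, p)
    k = secrets.randbelow(p - 1)      # one-time nonce, never reused
    t = pow(g, k, p)                  # commitment
    c = challenge(g, y, t)            # non-interactive challenge
    s = (k + c * x) % (p - 1)         # response blinds x with the nonce
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: check g^s == t * y^c (mod p) without ever seeing x."""
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(p - 1)          # the secret never leaves the prover
y, t, s = prove(x)
assert verify(y, t, s)
```

The verifier checks one equation and learns only that the prover knows x, which is exactly the "valid, but nothing else disclosed" property described above; production systems use much larger groups and more elaborate proof systems built on the same idea.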

Visiting the PETs Shop

Technology is at the heart of these advances. From cryptography to AI, new tools are making it possible to protect digital identities against a range of threats. But with great power comes great responsibility. Whether it’s human users at risk from phishing attacks or non-human processes vulnerable to security breaches, there will never be a point where security and privacy are guaranteed. Innovation will always be necessary to get ahead of bad actors.

For human identities, this might mean adopting stronger authentication methods. For non-human identities, it could involve developing more sophisticated ways to manage and secure API interactions across multiple cloud environments. The challenge is ensuring that these technologies are both effective and adaptable, capable of protecting identities at scale.

PETs Need to be Everywhere

As digital identity continues to evolve, the line between human and non-human identities will blur further. In commerce, for example, digital identities—whether of customers or the processes serving them—are becoming central to every transaction. A single transaction may trigger any number of APIs and services that go far beyond one person’s digital identity. And since not all of these problems have been solved, businesses will have to support the innovation necessary to keep their data safe.

Wrap Up – Loving Your PETs

The future of digital identity is definitely not boring! PETs play a crucial role in shaping how we protect digital identities and are definitely worthy of some focused attention. It’s not the only piece of the puzzle in keeping our data safe, but it’s a biggy.

For tech leaders, I’m afraid you have another area of technology you need to keep on your radar. Your organization must engage in shaping privacy-enhancing digital identity solutions. Don’t just install them, think about how they meet tomorrow’s requirements. Better yet, be a part of defining tomorrow’s requirements in the standards being developed today.

For individual contributors like me, it’s crucial to stay informed. Keep up with the latest security practices, and be on the lookout for open calls for comments on the standards that impact this space. Your voice matters in shaping the standards and regulations in this space.

And if keeping track of all this sounds overwhelming, why not let someone else do the heavy lifting? Reach out to me; let’s chat about how I can help by providing regular updates and insights, tailored to your needs. You don’t have to do this alone.

The post Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities appeared first on Spherical Cow Consulting.


IDnow

AML compliance in 2024: Assessing the effectiveness of AMLD6 and EU’s new AML package.

We explore the EU’s new AML package of rules and consider how it will affect the future of compliance in Europe. 

Ever since the first directive to combat money laundering and the financing of terrorism was issued in 1991, the European Union has continued to improve and harmonize the legislative arsenal of its member states. 

In the space of 30 years, six dedicated Anti-Money Laundering Directives (AMLD) have been issued. The first was mainly aimed at combating drug-related offences and introduced the first KYC provisions. The 4th and 5th Directives (AMLD4 & AMLD5) brought in increased transparency obligations, including access to beneficial ownership registers and strengthening controls on virtual currency transactions. With each new iteration, the scope of protection has expanded significantly and now covers many areas, ranging from art dealing to cryptocurrency trading.  

A major development to AML controls came in May 2024 with the release of the AML package, a set of legislative proposals aimed at strengthening the EU’s AML/CFT rules. The AML package aims to close regulatory gaps, strengthen cooperation between member states and ensure uniform application of the rules across the EU.  

Here, we explore some of the new rules and consider the effect they may have on AMLD6 and the future of compliance in Europe. 

5 new changes to AML rules and regulations in 2024. 

1. A new European Anti-Money Laundering Authority (AMLA) has been established and will be operational in Frankfurt from 2025. With a staff of 400, it will centralize anti-money laundering efforts, coordinate national authorities and conduct cross-border investigations. 

2. A directive on which mechanisms need to be adopted by member states to improve the AML/CFT regime. 

3. A regulation that establishes clear rules on how Financial Intelligence Units (FIUs) should cooperate. 

4. Crypto-asset service providers will now be required to collect and store information on the source and beneficiary of the funds for each transaction. This rule, known as the “travel rule”, already exists in traditional finance and requires that information on the source of the asset and its beneficiary travels with the transaction and is stored on both sides of the transfer. CASPs will be obliged to provide this information to competent authorities if an investigation is conducted into money laundering and terrorist financing. 

5. A directive on access to centralized bank account registers. This directive makes information from centralized bank registers, which contain data on the identity and location of bank account holders, available to member states through a single access point. 

What are the major changes to AMLD6? 

AMLD6, which came into force in December 2020, has introduced several new legal provisions and expanded the list of criminal offences related to money laundering. Faced with the diversification of money laundering schemes, it now includes offences that go beyond simple financial crime. There are now 22 additional offences, including environmental crimes, tax crimes and cybercrime.  

AMLD6 also encourages member states to prosecute “facilitators” who help to carry out illegal activities. How member states should prosecute is also being revised, and AMLD6 seeks to improve the deterrent effect of existing legislation by imposing tougher penalties. EU member states are now required to impose prison sentences of at least four years for serious money laundering offences, with heavier penalties for repeat offenders. Significant financial penalties can also be imposed (up to €5 million for individuals) to deprive the culprits of any profit derived from illicit activities. 

Another major development is the expansion of who should be held responsible for money laundering. From now on, legal entities could be liable for money laundering offences committed by their employees. Companies may also be subject to severe penalties, which could result in the company’s closure. Executives may also be held liable for money laundering offences committed within their organization as part of the EU’s plan to adopt “effective, proportionate and dissuasive criminal sanctions“.  

Recognizing the transnational challenges posed by organized crime and money laundering, AMLD6 promotes a rapid and effective exchange of information on suspicious transactions and ongoing investigations, as well as enhanced legal assistance in the collection of evidence and freezing of assets. It also promotes cooperation with specialized European agencies, such as Europol and Eurojust to facilitate the coordination of cross-border investigations. 

Finally, the legislation contains enhanced due diligence provisions for wealthy individuals with assets of more than €50 million, excluding their main residence, as well as an EU-wide limit of €10,000 for cash payments. 

The future of AML compliance. 

The implementation of AMLD6 has significant implications for businesses and financial institutions. Companies will now be required to protect themselves against compliance risks and adopt appropriate control mechanisms and systems, conduct regular audits, and raise awareness among their employees. This includes investing in advanced transaction monitoring and analysis technologies to proactively detect suspicious financial activity. These actions are necessary to protect the integrity of the company, avoid severe penalties, and maintain stakeholder trust. 
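
As a rough illustration of the kind of rule-based screening such transaction monitoring systems start from, here is a minimal sketch in Python. The thresholds, field names, and the `flag_transactions` helper are all hypothetical; real AML engines are far more sophisticated. It flags cash payments over the EU's new €10,000 cap, and possible "structuring" (repeated payments just under the limit by the same party):

```python
from collections import defaultdict

CASH_LIMIT_EUR = 10_000          # EU-wide cash payment cap under the new AML package
STRUCTURING_WINDOW_DAYS = 7      # illustrative look-back window
STRUCTURING_MIN_COUNT = 3        # illustrative: 3+ near-limit payments is suspicious

def flag_transactions(transactions):
    """Return alerts for (a) cash payments over the limit and
    (b) possible structuring: repeated payments just under the limit.
    Each transaction is a dict: {"id", "party", "amount_eur", "day", "cash"}."""
    alerts = []
    near_limit = defaultdict(list)  # party -> days on which near-limit payments occurred

    for tx in sorted(transactions, key=lambda t: t["day"]):
        if tx["cash"] and tx["amount_eur"] > CASH_LIMIT_EUR:
            alerts.append((tx["id"], "cash payment over EU limit"))
        elif tx["cash"] and tx["amount_eur"] >= 0.9 * CASH_LIMIT_EUR:
            days = near_limit[tx["party"]]
            days.append(tx["day"])
            recent = [d for d in days if tx["day"] - d < STRUCTURING_WINDOW_DAYS]
            if len(recent) >= STRUCTURING_MIN_COUNT:
                alerts.append((tx["id"], "possible structuring"))
    return alerts
```

For example, three cash payments of €9,500–€9,900 by the same party within a week would trigger a "possible structuring" alert on the third payment, even though no single payment exceeds the cap.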

In addition, many industries that were not previously required to comply with certain AML regulations will now need to be more transparent with their transactions. For example, from 2029, top-tier professional football clubs involved in large-scale financial transactions, whether with sponsors, advertisers or in the context of player transfers, will have to comply with certain KYC rules. Like the financial sector, football clubs will have to verify the identity of their customers, monitor transactions and report any suspicious transactions to the FIUs. 

As money laundering and terrorist financing are global problems, measures adopted at EU level must be coordinated with international measures; otherwise they will have a very limited effect. The European Union must therefore continue to consider the recommendations of the Financial Action Task Force (FATF) and other international bodies active in AML/CFT. 

The new package of AML rules has now been published in the EU’s Official Journal, which means that companies will have up to two years to implement some measures and three years for others. 

Building trust through KYC in banking.

How can you set up a KYC process that satisfies your customers and meets regulatory requirements? Download now to discover:

What is KYC?
The importance of KYC in the banking sector
Regulatory impact on KYC processes

Read now

By

Mallaury Marie
Content Manager at IDnow
Connect with Mallaury on LinkedIn


liminal (was OWI)

The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking

The post The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking appeared first on Liminal.co.

DHIWay

Product tracking, tracing and authenticity using CORD

The post Product tracking, tracing and authenticity using CORD appeared first on Dhiway.

Issue verifiable credentials using MARK Studio

The post Issue verifiable credentials using MARK Studio appeared first on Dhiway.

Ocean Protocol

DF104 Completes and DF105 Launches

Predictoor DF104 rewards available. DF105 runs Aug 29 — Sept 5, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 104 (DF104) has completed.

DF105 is live today, Aug 29. It concludes on September 5. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF105 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF105

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
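
As a rough illustration of what "evenly distribute throughout the week" can mean in practice, here is a small Python sketch. This is a hypothetical helper, not the actual DF Buyer agent logic; it splits a weekly budget into equal daily purchases, pushing any rounding remainder into the final day:

```python
def daily_purchase_budget(weekly_budget: float, days: int = 7) -> list[float]:
    """Split a weekly reward budget into near-equal daily purchases.
    Rounds each day to 2 decimals and absorbs the remainder on the last day."""
    per_day = round(weekly_budget / days, 2)
    budgets = [per_day] * (days - 1)
    budgets.append(round(weekly_budget - per_day * (days - 1), 2))
    return budgets
```

For a 37,500 OCEAN weekly budget, this yields seven daily purchases of roughly 5,357 OCEAN each, summing back to the full budget.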

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF104 Completes and DF105 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Create a Starter Pack!

Create a starter pack today: personalized invitations that bring friends directly into your space on Bluesky.

To learn how to create a starter pack in English, read our guide here.

Today, we’re launching starter packs: personalized invitations that let you bring friends directly into your space on Bluesky!

An example of a starter pack.

Recommend custom feeds and users to help your community find each other. Get started in the Starter Packs tab on your Bluesky profile.

What’s in a starter pack?

Custom feeds. On Bluesky, you can set any algorithm or topic as your main timeline. Examples include Quiet Posters (posts from your quieter mutuals) and Catch Up (the most popular posts from the last 24 hours).

Follow recommendations. Add your favorite accounts and encourage new users to follow them.

How do I create a starter pack?

Click the Starter Packs tab. On your profile, next to the media and likes tabs, you’ll see a new tab. Create a starter pack from your profile.

Create a starter pack. Use our auto-generation tool to create a starter pack, or build your own from scratch! You can create more than one starter pack. Click “Make one for me” to get a starter pack pre-filled with suggested users and custom feeds. You can add or remove items from this list. Or click “Create” to add users and feeds to your starter pack yourself. Set your starter pack’s name, description, and recommended users and feeds.

Share your starter pack! Every starter pack comes with a link and a QR code you can share. Message your starter pack to a friend, share it with your professional network, and post it on other social apps!

Say hello! You’ll be notified about users who join Bluesky through your starter pack.

Who can use starter packs?

Anyone with a Bluesky account can create starter packs.

If you don’t have a Bluesky account yet, you can join through a friend’s starter pack and start with their recommended customizations. Once you’re on Bluesky, you can add or remove these recommendations and customize your experience further.

If you’re already on Bluesky but want to join another community or get your friend’s recommendations, you can also use their starter pack to add to your experience!

Starter Pack FAQ

How many people and feeds can I add to my starter pack?

You can recommend up to 150 people and up to 3 custom feeds. New users will automatically have the Following and Discover feeds pinned.

How can I share my starter pack with more people?

Message a link to your friends, post about it on other social networks, share it with your professional network! Every starter pack comes with an automatically generated preview image that shows your starter pack’s name and some suggested users, to make sharing easy.

How do I find more starter packs on Bluesky?

You can share starter packs directly on Bluesky, and you’ll see an embedded preview for those links. Starter packs don’t currently appear in search, so to find one, a friend will have to send you the link, or you can see the embedded preview within the Bluesky app.

I was added as a recommended user in someone’s starter pack. Can I remove myself?

When you block the creator of a starter pack, you’ll be filtered out and removed from their starter pack. You can also report a starter pack to the Bluesky moderation team (see below).

Can I report a starter pack to the Bluesky moderation team?

Yes. You can report a starter pack by clicking the three-dot menu at the top of the starter pack. The Bluesky moderation team will review all reports and evaluate them according to our Community Guidelines.

Can I include a labeling service in my starter pack?

Currently, we don’t include labeling services in starter packs. We’re working first on improving the discoverability of these services in the app and on service reliability.

Wednesday, 28. August 2024

Matterium

BEYOND THE OUROBOROS — Finite and Infinite Crypto


Posting on X, Ethereum founder Vitalik Buterin recently expressed his concerns about the chain’s current use case, saying, “This worries me. Because it feels like an ouroboros: the value of crypto tokens is that you can use them to earn yield which is paid for by… people trading crypto tokens”. Famously, the ouroboros is the image of a snake eating its own tail, found in cultures across the world from ancient times, and Vitalik has hit the nail on the head here: yes, crypto does just eat itself.

Finite Crypto is — Token trading. A one-dimensional, zero-sum game where anyone making money does so through someone losing money, not through creating real value. It is just shifting money about. This has only a limited lifetime before capital moves on.

Infinite Crypto is — Opening up crypto to real world uses. Multi-dimensional, innovative, flexible, forward looking. A non-zero-sum game, where money is made by creating real world utility that generates true value. This has unlimited potential.

Currently what “crypto” means to most people is a finite, one-dimensional, zero-sum game that is just about token trading; any “yield” a token seller gets comes at the expense of another token buyer losing money. The money just goes round in circles: crypto is not generating any new value, it’s moving value from one person to another, and it relies on new money coming into the market to keep making it possible for existing token holders to cash out. As with a casino, the only winner in the end is the house; whatever someone does, those gas fees still have to be paid. It is all very finite and constrained. Crypto only works because of the dollar’s weakness: it does not suffer from inflation the way the dollar does, so crypto buyers try to use it as a hedge against inflation.

Ethereum has the potential to create so much more — Infinite Crypto, but isn’t really being used for anything innovative now, it’s not generating value in any real sense. Token trading is simply a way to move dollars about — token buyers spend their dollars on token, token goes up, maybe token goes down, and someone, somewhere, gains some value, then cashes their tokens out into dollars to spend it in the real world (paying those gas fees on the way). Even when token trading is done in a hundred percent legal way, it is still just moving money from losers to winners, it all just goes round in a circle and doesn’t grow — finite. At the moment, growth in crypto is mostly an illusion, it gets bigger because more retail investors put their savings in, not because crypto does something useful that increases value.

All this was neatly encapsulated, weirdly enough, by a scholar of religion named James P. Carse. He said “There are at least two kinds of games: finite and infinite” and defined them in this way: “A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play”. Currently crypto is a finite game, but crypto needs to become an infinite game, with evolving rules and boundaries, where the purpose is to keep things going and continue to create new value in as many ways as is possible. We are done with the old crypto — Infinite Crypto awaits, free of the shackles and constraints of the finite token game and open to the multiplicity of reality.

Vitalik understands this better than most and realises its implications saying, “while defi might be great it’s fundamentally capped and can’t be the thing that brings crypto to another 10–100X adoption burst.” Crypto has been around long enough that most people who feel at home with the token market as it is have already bought into it; there may be an incremental growth in numbers perhaps, but not the 10–100x step change that Vitalik sees the potential for. He can, though, see where that’s coming from — “I would love to see a story for where the yield is coming from…that’s rooted in something external”. The next step for Ethereum lies in connecting to infinite possibilities of the real world, in other words.

Crypto as it stands is playing the finite game; Infinite Crypto is where we need to take things next, breaking out of the current doom loop of finite crypto. Infinite Crypto is where the growth is; that’s what will make sustained money for everyone. If we fail to break out of the doom loop, the capital will eventually go elsewhere and the blockchain will end up like Second Life (do any of you even remember Second Life? Second Life was the future once, long, long ago), a niche digital world with almost no impact on real life. Finite games always end; they become stagnant, innovation stops, they die.

But this is not what Vitalik and the team created Ethereum for; it was created for Infinite Crypto. It started with a vision of transforming the entire world, but it has become limited and massively inward facing, all about those finite zero-sum games. You can play the casino game just as happily with Bitcoin as you can with Ethereum, if you really want to, but Vitalik and his team built Ethereum for smart contracts, and the real world is built on contracts. Find a way to enable Ethereum to streamline real world contracts through smart contracts and it starts to generate actual yield: yield for potentially everyone involved, not yield produced by taking money from losers to give to winners (and on a pretty random basis at that). There’s an infinity of opportunity for the taking.

A conservative estimate suggests that there’s half a TRILLION dollars to be gained by enabling efficiency savings in international trade and business, the kind of efficiency savings that Ethereum is eminently well equipped to provide — the International Chamber of Commerce reckon there’s $280 billion in things like import and export deals, currently encumbered with telephone book thick paper documentation (yup, they still print it all out and cart it around), then there’s $100 billion from the deregulation of US real estate commissions that open them up to innovative ways of dealing with property contracts and all that associated paperwork, not to mention real estate in the rest of the world. On top of that there’s likely to be well over a hundred billion in other savings here and there, so half a trillion is probably on the conservative side. Then there’s value-added services in the real world that could use Ethereum — it can deliver proven, valid, data for AI based searches on real estate that prevents the AI from hallucinating, for example. If there is any doubt about its veracity, the data can be checked back to the blockchain and verified.

This is all business that could be transacted over Ethereum, business with real, actual, yield, the kind of yield Vitalik means here, and it is pretty much infinite. Vitalik is reasserting the original vision, he is reminding us of the way, that this was the future once.

Ethereum has had its playpen stage, where idealistic utopians dreamed of a financial system untethered from the state and from tax, and has seen that largely swept away by ruthless speculators and, yes, outright scammers, who have turned the whole space into a dog-eat-dog wilderness (with RFK Jr we’ve seen what happens to your reputation when it’s alleged you eat dogs…). Now though, with Vitalik’s lead here, it’s time to grow up and grow out, to connect Ethereum to stuff that generates yield all round and use the business world to drive that 10–100x adoption that Ethereum is ripe for: Infinite Crypto. Right now, crypto risks just stalling out and senescing; it is basically on life support from people sacrificing their futures to buy a bunch of worthless shit coins, and lending on crypto assets is just a way of building up leveraged positions and instruments that have no economic fundamentals — all that technology could be doing mortgages instead; business, with actual yield.

We have every opportunity to make Ethereum economically productive in the real world without breaking the law. The future for the blockchain has never been brighter, but that future is only accessible after the scamming stops, we break out of the loop and attain the infinite.

We need to get back to the original Ethereum vision

This is good news for me. After being the Ethereum launch coordinator in 2015, I set up Mattereum in 2017 to achieve that future. Since then, we’ve been working on laying the foundations, putting the tools in place to enable Ethereum to interact effectively with the real world. We’ve sorted out the lawtech so we can make smart contracts enact real world contracts that are legally binding, and backed them with warranties that work under the 1958 New York Convention on Arbitration, so they stand up in court in any of 170 countries. We have the tools that connect Ethereum to the physical world, the tools that can be used to bring those efficiencies to world trade, that enable novel, creative business solutions to use Ethereum.

Vitalik has given us the direction, we have built the tools — together we can uncoil the snake, Infinite Crypto is within reach.

BEYOND THE OUROBOROS — Finite and Infinite Crypto was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

What you need to know about Mobile Driver’s Licenses

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.
A Mobile Driver’s License (mDL) is a digital specification for a physical driver’s license. Given that driver’s licenses are widely used for identification, it’s likely that a digital version would enjoy similar ubiquity online. Here, we look at what exactly they are (are they verifiable credentials?), their benefits, and why they are not currently widely available.

By Tim Spring

It all starts with the International Organization for Standardization (ISO) 18013 series. In a nutshell, this series of standards creates a common basis for international recognition of a digital driver’s license. 

The standard lays out the scope as follows:

You must use a machine to obtain the mDL. 
The mDL must be tied to the mDL holder. 
You must be able to authenticate the origin of the mDL data.
You must be able to verify the integrity of the mDL data.

Critically, there are two things the standard does not cover:

How the holder’s consent to share their data is obtained.

Any requirements on how the mDL data is stored.

So now we know what the mDL is: it is a driver’s license that can be stored on your mobile device and is tied to you. It can be proven to be as accurate as a physical card because we can prove that it was issued by a proper authority — such as the department of motor vehicles — and prove that the integrity of the data has not been compromised.

But an mDL is not the same as a verifiable credential because the mDL data can technically be stored in a siloed database. However, a verifiable credential, which allows a person to hold their data, could absolutely fit this standard and be used to easily issue mDLs, as they meet all the other requirements laid out above. 

The benefits 

The benefit to using mDLs is similar to the benefits of using verifiable credentials. They are simple to verify and use, convenient, and often more secure than a physical document.

There are guides written on how to spot a fake ID. This is because each state has its own methods for trying to make its driver’s licenses difficult to counterfeit. An mDL offers a much simpler way to verify a person’s identity or their age for eligibility to purchase goods: all you need to do is scan the QR code and the software will tell you. You don’t need a flashlight, or to look for holograms. 

Most people also now have a mobile device that is always with them. Carrying a digital version of your driver’s license means you don’t have to worry about accidentally leaving your ID somewhere or fishing through a bag to find it; it is always at your fingertips.

Lastly, the security features of these mDLs, especially if they are created through verifiable credentials, are hard to match. If the mDL is a verifiable credential, it is essentially immune to forgery because the software can cryptographically verify the origin of the data, and there is an additional layer of security from the data being stored on the holder’s mobile device instead of a centralized database, removing the risk from data breaches. 
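
To see why forgery is so hard, here is a deliberately tiny Python sketch of the sign-and-verify pattern behind verifiable credentials. The numbers are toy RSA parameters and every name is illustrative; real mDLs use standardized signature suites (X.509/COSE with proper key sizes), not this code:

```python
import hashlib
import json

# Toy RSA parameters (tiny and insecure; for illustration only).
P, Q = 61, 53
N = P * Q        # public modulus
E = 17           # public exponent, published by the issuer
D = 2753         # private exponent, kept secret by the issuer

def digest(credential: dict) -> int:
    """Hash the credential's fields into a number below the modulus."""
    data = json.dumps(credential, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def issuer_sign(credential: dict) -> int:
    """The issuing authority (e.g. a DMV) signs the credential digest
    with its private key."""
    return pow(digest(credential), D, N)

def verifier_check(credential: dict, signature: int) -> bool:
    """Anyone holding the public key (E, N) can check origin and integrity:
    the signature only 'decrypts' back to the digest if it was produced
    with the matching private key over exactly this data."""
    return pow(signature, E, N) == digest(credential)
```

Changing even one field of the credential changes its digest, so a tampered credential fails the check (with real-size keys this is computationally guaranteed; with this toy modulus, occasional digest collisions are possible). The holder never needs the private key, only the signed data.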

Why are these mDLs not commonplace?

One of the reasons why these credentials have not yet been widely adopted is that regulations have not kept up with the technology.

In the US, the REAL ID Act of 2005 wasn’t updated until the end of 2020 to include permission for digital and mobile driver’s licenses. But the federal government leaves the issuing of driver’s licenses to each state, meaning that state governments also have to vote on implementation; as of August 2024, only 13 have passed legislation to start issuing mDLs. 

If they are being issued by your state, they are not currently a replacement for your license but an additional way to represent it, meaning that you will likely still have a physical license somewhere. This could be another reason many haven’t adopted them: they see it as an unnecessary add-on.

It’s important to remember that this technology is still new. Many people might not understand or trust it yet, but as the world shifts to be more digital, it will be a big part of how we prove our identity moving forward. 

If you are part of an organization looking into mDL technology, or a better way to prove your identity online, Indicio can help! Get in touch with our team of experts today.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.


Ontology

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization


As AI becomes more embedded in every aspect of our lives, the debate around California’s AI Safety Bill (SB 1047) highlights a critical issue: the risks of centralized AI control. While the bill attempts to mitigate these dangers, the real solution lies in decentralization — distributing control and ensuring that AI systems align with human values, privacy, and security.

The Risks of Centralized AI

Centralized AI systems, controlled by a few powerful entities, pose significant dangers. We’ve already seen how centralized control can lead to data misuse, biased algorithms, and even AI-driven censorship. When a handful of corporations dictate the direction of AI development, the risks of abuse and manipulation skyrocket. For example, if a single entity controls the data and algorithms behind AI-driven surveillance, the potential for privacy violations and authoritarian control becomes disturbingly real.

Decentralization isn’t a buzzword; it’s the backbone of a system we can trust. Unlike centralized models that concentrate power, decentralization spreads control across a network, making it nearly impossible for any one actor to manipulate or exploit the system. Decentralized identity (DID) systems, for instance, enable individuals to maintain ownership of their digital identities. This ensures that interactions with AI are grounded in verified, user-controlled data — without the risk of breaches or exploitation by a centralized authority.

The Role of Decentralized Identity and Privacy

DIDs, like those powered by Ontology’s ONT ID, are a cornerstone of decentralized AI. In a world where AI might drive everything from financial transactions to governance, ensuring that human values and rights are upheld is critical. Decentralized systems provide a framework where proofs of identity, timestamped transactions, and zero-knowledge proofs can be securely integrated, preventing AI from being hijacked by non-human interests.

Moreover, privacy must be a cornerstone of AI development. Today’s centralized AI models often rely on vast amounts of personal data, raising serious concerns about surveillance and misuse. Decentralized approaches, powered by technologies like zero-knowledge proofs, allow for the validation of data without compromising privacy. This ensures that AI systems remain transparent and accountable, free from the risks of censorship or manipulation.
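
As a concrete, deliberately simplified example of this idea, the classic Schnorr protocol lets a prover demonstrate knowledge of a secret without ever revealing it. The Python sketch below uses toy group parameters and hypothetical function names; production systems use elliptic curves or SNARK toolchains, not numbers this small:

```python
import hashlib
import secrets

# Toy group parameters (insecure; illustration only).
# P = 2*Q + 1 with P, Q prime, and G generates the order-Q subgroup.
P, Q, G = 1019, 509, 4

def _challenge(h: int, a: int) -> int:
    """Derive the challenge by hashing (Fiat-Shamir): makes the proof
    non-interactive while keeping it binding."""
    data = f"{G}|{h}|{a}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove_knowledge(x: int, h: int) -> tuple[int, int]:
    """Prove knowledge of a secret x with h = G^x mod P, without revealing x."""
    r = secrets.randbelow(Q)          # one-time blinding value
    a = pow(G, r, P)                  # commitment
    c = _challenge(h, a)              # challenge
    s = (r + c * x) % Q               # response; x stays hidden behind r
    return a, s

def verify_knowledge(h: int, a: int, s: int) -> bool:
    """Check G^s == a * h^c (mod P); passes only if the prover knew x."""
    c = _challenge(h, a)
    return pow(G, s, P) == (a * pow(h, c, P)) % P
```

The verifier learns that the prover knows `x` (say, a private key tied to an identity attribute) but learns nothing about `x` itself: the response `s` is masked by the random `r`.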

Global Context and the Future of AI Regulation

California’s AI Safety Bill is part of a growing global trend toward regulating AI. The European Union’s AI Act, for instance, introduces strict guidelines on the use of AI in high-risk areas, but it doesn’t take effect until 2025. Meanwhile, China’s approach to AI regulation is more focused on controlling and harnessing AI for state objectives, often at the expense of individual freedoms. In this landscape, decentralization offers a way to protect innovation while ensuring that AI development remains aligned with democratic values.

By contrast, decentralized AI frameworks ensure that no single entity holds too much power over these systems. They offer a pathway to develop AI technologies that are resilient, transparent, and aligned with public interests. This approach could prevent the kind of monopolistic practices that have plagued the tech industry for years, while fostering innovation in a way that centralized models cannot.

Conclusion: A Call for Decentralized Solutions

The California bill may mean well, but by doubling down on centralization, it misses the mark. We don’t need more gatekeepers; we need systems that empower individuals, protect privacy, and resist censorship. Decentralization isn’t just a technical fix; it’s a moral imperative for the AI-driven world we’re hurtling toward. As discussions around AI regulation continue, it’s clear that decentralization is a fundamental necessity. By embracing decentralized technologies, we can build AI systems that are not only safe and trustworthy but also aligned with the principles of self-sovereignty and privacy. At Ontology, we’re committed to leading this charge, creating the frameworks that will ensure AI serves humanity — not the other way around.

Read more Ontology snippets here: https://ont.io/news/1086/The-Telegram-CEOs-Arrest-Highlights-the-Urgent-Need-for-Decentralization-and-Privacy-Protections

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System

The post DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System appeared first on Indicio.

auth0

Identity Challenges for AI-Powered Applications

What are the Identity security challenges that developers of AI-based applications must be aware of? Let’s explore some of them.

Trinsic Podcast: Future of ID

Karyl Fowler - From Transmute to Global Trade and the Power of Digital Identity


In this episode, I sit down with Karyl Fowler, co-founder and CEO of Transmute, a company at the forefront of integrating modern identity technology into global trade. Before founding Transmute, Karyl's work in the semiconductor and bioelectronics industries provided her with unique insights into the complexities of global supply chains.

We explore a variety of topics, including:

The challenges of digitizing trade documentation and how Transmute is solving the multi-billion dollar paper problem
The evolution of decentralized identity and its application to physical goods and cross-border commerce
Key lessons learned from working with regulators and how Transmute has navigated the highly regulated trade industry

Karyl offers valuable perspectives on the future of trade and digital identity, making this an episode you won't want to miss!

You can learn more about Transmute on their website: transmute.industries.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


DHIWay

Dhiway makes the Finternet possible


The BIS Working Papers No. 1178 (PDF), authored by Agustín Carstens and Nandan Nilekani, introduces Finternet: the financial system for the future, which holds immense potential for the financial sector and promises a brighter future.

The paper outlines a way to unlock the potential within the financial sector by enabling an architecture that draws on the Internet, decentralization, and unbundling. Dhiway is one of the small core group of companies working on developing the concepts in the paper into a functioning system. With us on this journey are Silence Laboratories, JUSPAY, Rooba Finance, and the Solana Foundation.

At the outset, the paper sets out the vision for the Finternet: multiple financial ecosystems interconnected with each other, much like the internet, designed to empower individuals and businesses by placing them at the centre of their financial lives. It advocates for a user-centric approach that lowers barriers between financial services and systems, thus promoting access for all.

The tokenization of real-world assets is an integral component of the Finternet. With tokenization comes the need for a well-designed governance system built on regulatory frameworks with which the technology choices are compliant. If you are not yet familiar with asset tokenization, a primer written by Suraj Atreya is essential reading.

Blockchain technology is a key piece of the technology infrastructure, and it is where CORD, our Open Trust Infrastructure, fits in to enable the design of innovative applications and solutions.

The emerging Finternet is not just a blueprint for information technology architecture. It is conceived to unbundle traditional, centralized financial systems through innovation, transparency, enhanced security, cost efficiency and interoperability, all while remaining highly user-centric.

Find out more about the work underway at Finternetlab.io.

The post Dhiway makes the Finternet possible appeared first on Dhiway.


Samagra and Dhiway come together to build a developer community for CORD.


Samagra Development Associates Private Ltd (“Samagra”), engaged in implementing Code for GovTech (C4GT) to build and sustain developer communities, has joined hands with Dhiway, a leading provider of enterprise Web 3.0 open trust infrastructure, to create communities of innovation around the open-source Layer 1 blockchain framework CORD.

Dhiway and Samagra will offer structured mentorship and outreach engagement programmes for community members to build innovative solutions to solve complex nation-scale challenges using the CORD blockchain.

This partnership will also foster engagement with industry stakeholders, government agencies and regulatory bodies to help build awareness and engagement around Open Trust Infrastructure.

Nitin Kashyap, Senior Vice President and Head of Product at Samagra stated, “India is making remarkable strides in building DPGs and DPI. As we set new benchmarks, it becomes crucial to ensure the adoption, maintenance, and sustainability of DPGs and open-source technology for the public good. Achieving population-scale impact requires a comprehensive, whole-of-system approach. Through initiatives like C4GT, we aim to unite organizations and contributors to drive this mission as a community. Our collaboration with Dhiway marks a significant step forward in strengthening this community.”

K P Pradeep, CSO at Dhiway, emphasized, “Today it is critical that developers acquire the habit, discipline and knowledge for building at scale using the CORD Blockchain framework. The multiplier effect of open standards, open source software, open protocols, and open trust infrastructure will unlock the potential to solve challenges for India and the world. Samagra’s focus on enabling DPGs that fit within a DPI complements our vision of reshaping the digital future.”

About Samagra

Samagra is a mission-driven governance consulting firm that works exclusively with governments to transform governance. This involves working with the senior political and bureaucratic leadership of states and the Centre on deep systemic reforms, leveraging tech and data, to strengthen the state’s capacity to deliver sustainable outcomes at scale across domains such as education, agriculture, skilling, employment, health and public service delivery.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.

The post Samagra and Dhiway come together to build a developer community for CORD. appeared first on Dhiway.


Integra and Dhiway Partner Up to Expand Verifiable Credentialing


Integra Micro Systems Pvt Ltd (“Integra”), a leading provider of advanced technology products and solutions across sectors such as BFSI, Telecom, Government, Retail/eCommerce, Enterprise, and Airlines, has announced a strategic partnership with Dhiway, a pioneer in enterprise Web 3.0 open trust infrastructure. This collaboration aims to revolutionize the business of verifiable credentialing and drive forward application modernization efforts.

Integra’s expertise in Product and Tech Stack Development, Identity Authentication, IT Infrastructure Modernization, Application Modernization, Enterprise Automation, IT/Network Automation, Zero-Trust Architecture, Bot-AI-ML, DevSecOps, and Systems Integration will be instrumental in this joint initiative. By integrating Dhiway’s state-of-the-art Web 3.0 infrastructure, the partnership will enhance the deployment and scalability of digital credentials, streamline automation processes, and modernize infrastructure to effectively manage and verify digital trust and security. This synergy seeks to expand the acceptance network for verifiable credentials, ensuring that modern applications and systems are equipped to handle and secure digital records efficiently.

Mahesh Jain, Managing Director at Integra, stated: “Our partnership with Dhiway marks a significant step forward in our mission to modernize and secure digital ecosystems. By leveraging Dhiway’s cutting-edge Web 3.0 infrastructure, we are poised to transform the landscape of verifiable credentialing. Additionally, we intend to extend our Wallet software to support CBDC, NFTs, and Crypto, utilizing Dhiway’s robust blockchain technology. This collaboration not only enhances our capabilities in application modernization and digital trust but also aligns with our commitment to driving innovation and efficiency across industries. Together, we are setting new standards for digital identity management and trust infrastructure, paving the way for a more secure and reliable digital future.”

Satish Mohan, CEO at Dhiway, emphasized: “We are excited to welcome Integra into the Dhiway ecosystem. Our Open Trust Infrastructure, built on the foundation of Web 3.0 and state-of-the-art cryptography, has revolutionised how organisations secure and exchange data with continuous assurance. This partnership with Integra reinforces our commitment to advancing digital trust, especially within the financial sector. Together, we are poised to redefine the standards for secure and transparent digital ecosystems, delivering unparalleled value to our customers.” 

About Integra Micro Systems Pvt Ltd

Founded in 1982, Integra Micro Systems Pvt Ltd is a leader in innovative solutions for the Government, BFSI, and Telecom sectors. The company has a rich history of pioneering advancements, including being the first to port UNIX on Indian hardware, transitioning to Linux in the mid-90s, and developing the WAP stack for handheld devices. In 2007, Integra introduced the MicroATM device, revolutionizing financial inclusion in India and laying the groundwork for Aadhaar-based payment systems. Today, Integra excels in Digital Transformation, offering solutions in Enterprise Automation, Infra Modernization, Software Development, Systems Integration, AI/ML-based analytics, and advanced digital identity management, driving efficiency and progress across various industries.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange, and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.



The post Integra and Dhiway Partner Up to Expand Verifiable Credentialing appeared first on Dhiway.


Dock

A Deeper Look at Credential Monetization and Ecosystem Payments


In our 2023 Masterclass on Reusable Digital Identity, we explained how verifiable credentials simplify organizations’ processes and improve customers’ experience by making it easy to reuse trusted identity data across business partners. This led us to focus our 2024 Roadmap on creating tools to simplify the management of digital identity ecosystems. With the help of our early adopters who provided valuable feedback, Dock Certs now contains simple-to-use tools for managing the trust relationships in a custom ecosystem.

Full article: https://dock.io/post/a-deeper-look-at-credential-monetization-and-ecosystem-payments


BlueSky

New Anti-Toxicity Features on Bluesky

Trust and Safety (T&S) affects everything — from community policy and spam detection, all the way to the order that replies show up on a post. At Bluesky, the product team works hand-in-hand with T&S to design features that balance safety, ease of use, and fun.

We are publishing a series of blog posts on Trust & Safety efforts at Bluesky. This is the first in the series.


In this blog, we’re looking specifically at toxicity (harassment, dunking, etc.) and some steps we’re taking to mitigate it from the product perspective. Be sure to update your app to the latest version (1.90) to access many of these features!

Detaching quote posts

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

This helps you maintain control over a thread you started, ideally limiting dog-piling and other forms of harassment. On the other hand, quote posts are often used to correct misinformation too. To address this, we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future.

Note: Like blocks, quote post removals are public data. The Bluesky app won’t list all the quote post removals directly on your post, but developers with knowledge of the Bluesky API will be able to access this data.

Detaching the original post from a quote post.

Hiding replies

In app version 1.90, you can now hide replies on your post. Only the original creator of the thread can hide replies. All hidden replies will be placed behind a Hidden replies screen — so they’re still accessible, but much less visible.

Note: Hidden replies – and which posts were hidden by the author – are still public data.

How to hide a reply.

Priority notification filters

If you navigate to Notifications and click the Settings cog in the top right corner, you can now manage your notifications in more detail. With the priority notifications feature, you can filter your notifications to only receive updates from people you follow. We hope this is helpful for people with large followings who are always receiving an influx of notifications, and also for people who may not have expected that their post would get so much attention.

We’ll keep tuning this feature and adding additional options for notifications.

Find the priority notifications filter setting in the Notifications tab.

Changes to how replies show in timelines

Historically, the Bluesky app has shown every reply in the Following feed. This means that every reply has the same visibility as a top-level post, which is often not the poster’s intention. We’re reducing the frequency of replies in the Following feed, showing only conversations that involve replies between at least two people you follow.

Additionally, this update should make it much easier for you to update older threads. Now, when you reply to an older thread of yours, it’ll get bumped to the top of your followers’ feeds. (You’ll no longer have to repost your own reply to surface it to your followers.) This update also prevents replies from being separated from the top-level post, making them easier to understand.

How replies are now displayed.

Applying blocks to lists

Bluesky has three kinds of lists: starter packs, curated user lists, and moderation lists.

Now, when you block the creator of a starter pack or a curated user list, you’ll be filtered out of any lists they create. (Blocks still have no effect on moderation lists, because that would defeat their purpose.)

Additionally, we’re updating our policies around acceptable list titles and descriptions and will be labeling lists more aggressively. We’ll share more on this in a blog post next week from the Trust & Safety team.

Future work

Product work, especially as it relates to Trust & Safety, is always a continuous effort. We’re also making some updates on our backend infrastructure to combat ban evasion, botnets, and other forms of toxicity.

We’ll be publishing an update next week from the Trust & Safety team on some of these efforts.


TBD

Open Standards at TBD

How TBD is leveraging open standards

At TBD, we are committed to building a decentralized future where users have greater control over their data and organizations can interact in a more open, trustworthy, and secure way. Open standards are the foundation of this vision, enabling seamless collaboration and interoperability across systems.

Everything we do at TBD is enabled and strengthened by open standards. Our most notable projects, Web5 and tbDEX, are deeply rooted in these open standards. The frameworks for decentralized identifiers (DIDs), verifiable credentials (VCs), and the protocols that facilitate their sharing form the backbone of our work.

Our Approach to Open Standards

Open standards ensure that different systems and organizations can work together seamlessly, creating a cohesive environment where data and identity can move across personal and organizational boundaries.

At TBD, we are deeply involved in several key standards bodies to ensure that the standards we rely on are robust and interoperable:

Decentralized Identity Foundation (DIF): This organization serves as an incubator for new ideas and standards related to decentralized identity. We are actively contributing to several key initiatives here, such as decentralized web nodes and trust establishment protocols.

W3C: The World Wide Web Consortium (W3C) is the authority on web standards, and we are heavily involved in their work on DIDs and VCs. W3C’s role in defining these standards is crucial for ensuring their broad adoption across the web.

OpenID Foundation: We’re also working with the OpenID Foundation to integrate their standards with VCs and DIDs. This work is focused on extending OpenID’s capabilities beyond web-based applications, making them applicable in backend services and mobile environments.

One of our main tasks is ensuring that our software aligns with these standards. Our Web5 spec and tbDEX spec are prime examples of adopting existing specifications to meet our broad interoperability needs.

Current Focus Areas

Our ongoing work in the standards space is focused on several key areas:

Interoperability: We’ve defined an interoperability profile for tbDEX, which outlines the standards we’re using and how they interact. This is a starting point for enabling seamless exchanges on the tbDEX network.

Selective Disclosure: As we look to enhance user privacy and control, we’re exploring the use of selective disclosure credentials. This allows users to share only the information necessary for a specific interaction, rather than their entire credential.

Trust Frameworks: We’re also working on establishing a trust framework that will enable different organizations to agree on legal and compliant ways to trust one another. This is particularly important for interactions on the tbDEX network, where trust is paramount.
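The selective-disclosure idea above can be sketched in a few lines. The credential fields and helper below are hypothetical, and real schemes such as SD-JWT or BBS+ signatures enforce this cryptographically (the verifier can still check the issuer’s signature over only the revealed claims) rather than by simple filtering:

```python
# Hypothetical credential, for illustration only: the holder reveals just the
# claims a verifier asked for and withholds everything else.
credential = {
    "name": "Alex Example",
    "date_of_birth": "1990-04-02",
    "nationality": "US",
    "account_tier": "premium",
}

def disclose(credential: dict, requested: set) -> dict:
    """Build a presentation containing only the requested claims."""
    return {claim: value for claim, value in credential.items() if claim in requested}

presentation = disclose(credential, {"account_tier"})
assert presentation == {"account_tier": "premium"}  # name, birth date, nationality stay private
```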

Looking Ahead

As we advance our projects, we remain focused on refining our specifications to ensure they are well-defined, thoroughly tested, and widely adopted. This includes ongoing work on the Web5 spec, which we are continuously improving with better test vectors and more robust compliance checks.

We’re also making significant strides with our Rust Core approach, which will form the basis for many of our SDKs. This effort will allow us to support multiple languages more efficiently and ensure greater consistency across our implementations.

The work we’re doing now is laying the groundwork for a decentralized future where users have more control over their data, and organizations can interact in a more open, trustworthy, and secure way. As we move forward, our commitment to open standards will remain at the heart of everything we do.

Get Involved

If you're working on implementing verifiable credentials or DIDs, please reach out!

Join our Discord community for direct access to our team and ongoing discussions. You can also find us on Twitter @TBDevs.

We look forward to your contributions and questions!

Tuesday, 27. August 2024

Finicity

New report: Building the future of bill payments 


In today’s rapidly evolving digital landscape, consumer preferences and expectations are reshaping the way we engage with financial transactions. Choice lies at the heart of consumers’ financial lives, including how they pay their bills — from traditional methods like checks and cards to emerging technologies like account-to-account payments.  

To understand how consumers prefer to pay their bills and why, and how they want to do so in the future, Mastercard surveyed over 2,000 consumers across the U.S. We explored the evolving landscape of consumer payment preferences, focusing specifically on the intersection of choice, convenience, and security, and how these core tenets will shape the future of bill payment.  

Explore some of the highlights of the report below or download the full report here

An overview of bill payments and preferences  

Consumers are looking for a seamless, efficient, secure way to pay their everyday expenses. The research shows that they are consistently turning to credit and debit cards, as well as options where they can pay directly from their bank accounts, like Bill Pay and ACH/e-check options.   

Credit cards top the list of payment methods most often used for recurring bills at 47%, followed by bill pay features through banks at 41%, debit cards at 39%, and ACH at 37%.  

Looking forward, respondents are inclined towards similar payment methods for future recurring bills, with credit cards and bill-pay-by-bank features leading the way. This trend underscores the reliability and trust needed for recurring expenses. 

Get all the insights by downloading the full report

Consumers are driven by choice  

Consumers want three fundamental things in their payment experiences: choice, convenience, and security, and they want payment solutions that empower these elements.   

Respondents place high value on choice and flexibility in how they pay their bills: an overwhelming number expect businesses to provide multiple payment options, indicating a strong demand for variety in how they pay.    

However, only 51% of respondents feel they are frequently given the opportunity to choose their preferred payment method. This suggests a sizable gap in businesses meeting these expectations consistently.  

Convenience, cost and security pave the way for open banking  

Based on the data, there is a clear opportunity for more businesses to embrace new kinds of payment methods supported by open banking technology.  

These new methods use consumer-permissioned connections to bank accounts for payment data rather than having the consumer input their card or account and routing numbers.  

The majority of consumers, across all age groups, are open to new pay-by-bank methods that would save billers money and reduce the likelihood of non-sufficient fund returns – as well as offering security, convenience, and support for consumers to manage their finances.  

Download the bill payments report to learn more about how open banking increases choice in bill payments for consumers and businesses, or head over to our open banking blog for inspirational use cases and insights. 

The post New report: Building the future of bill payments  appeared first on Finicity.


TBD on Dev.to

How Web5 and Bluesky are Building the Next Layer of the Web - A Comparative Analysis


As companies increasingly commodify our personal data and privacy breaches make headlines, many technologists are creating user-centered frameworks that empower individuals to control their digital identities and personal information. This concept, known as Self-Sovereign Identity (SSI), enables users to decide what data they share and with whom. While blockchain technology is a popular choice for implementing SSI, companies like TBD are exploring (and even creating) alternative technologies to achieve these goals.

My Perspective on the State of SSI

Our efforts at TBD are part of a larger movement. In fact, there’s a consortium of tech giants and startups working together through the Decentralized Identity Foundation to establish open standards and best practices for SSI, focusing on:

Digital identity interoperability
Data ownership
Reliable digital verification methods

The SSI industry is making tangible progress, especially in government sectors, as our technological solutions support the advent of Mobile Driver's Licenses.

However, one of my concerns with our industry is that every company is implementing its own proprietary methods. Despite aiming to solve similar problems, companies are developing their own unique DID methods, wallets, and tooling. This fragmentation raises questions for me:

Can we achieve widespread adoption with disparate systems? Will the multitude of competing mechanisms overwhelm both users and developers? Will our various systems eventually work in tandem?

In November 2023, I began investigating the answers to these questions through a livestream series where I interviewed SSI experts from different companies. After conducting approximately 30 interviews, these questions remain unanswered. However, I’ve gained more in-depth knowledge about:

Key players in the SSI space
Various technical approaches to implementing SSI
Real-world applications of SSI

Interviewing Bluesky

I most recently interviewed Dan Abramov, creator of Redux and React core team member, about his work at Bluesky and the development of Bluesky's underlying technology – Authenticated Transfer Protocol, or AT Proto for short. I learned that while TBD’s Web5 and Bluesky’s AT Proto share the vision of a decentralized and user-centric web, their approaches and underlying technologies offer a fascinating contrast. I'll examine these parallel approaches in hopes that TBD, Bluesky, and the broader community can gain valuable insights into building infrastructure for the decentralized web.

Building the Next Layer of the Web

Similarities

The web as we know it today consists of physical, network, transport, application, and data layers. Instead of replacing the existing architecture altogether, AT Proto and Web5 aim to add a new layer enabling data to exist beyond individual applications. Both provide tools for developers to build apps within their respective ecosystems.

Bluesky actually serves as a reference implementation to inspire developers and showcase AT Proto's potential.

Differences

AT Proto focuses on decentralized social media, while Web5 enables developers to build any type of application, from financial tools to social media to health management. For example, I developed a fertility tracking app during a hackathon to demonstrate personal health data ownership. Additionally, at TBD, we use components of the Web5 SDK to build the tbDEX SDK, an open financial protocol that can move value anywhere around the world more efficiently and cost-effectively than traditional financial systems.

Data Portability Similarities

A common frustration with traditional web applications is that users often lose access to their data when a platform shuts down. Even if a user can export their data—say as a CSV file—it becomes static, no longer live or interactive. This data is essentially lost for most users, especially non-technical ones, as it's difficult to rebuild the ecosystem that once surrounded it. For example, moving from one social media app to another means users lose their followers, viral posts, and reputation and have to start from scratch.

Web5 and AT Proto enable users to take their data from one application to another. For example, if a user leaves Bluesky, which operates on AT Proto, they can migrate their data to another AT Proto-compatible app without losing their social connections or posts. Similarly, if an app built with Web5 were to shut down, a user could bring their data to another Web5 app.

Differences

Data portability on these platforms varies due to different data management approaches. AT Proto uses a federated model where each app operates a Personal Data Server (PDS). The PDS, typically managed by the app provider, stores all user data in a repository tied to the user’s identity. Users can move their repository—containing posts, social graphs, and more—between apps within the AT Proto ecosystem by connecting it to another PDS.

In contrast, Web5 depends on Decentralized Web Nodes (DWNs), which are personal data stores fully controlled by the user. To switch apps, users point the new application to their DWN and specify the types of data users of the app can access.

Use of W3C Standards for Authentication Similarities

Both AT Proto and Web5 leverage the W3C standard called Decentralized Identifiers (DIDs), which are globally unique alphanumeric identifiers that can move with you across different applications. This enables users to maintain their identities consistently across platforms.

While DIDs are often associated with blockchain technology, both Web5 and AT Proto implement a blockchain-less approach. For instance, Bluesky uses a custom DID method called did:plc (DID Placeholder), while Web5 employs did:dht (DID Distributed Hash Table), which anchors DIDs on BitTorrent instead of a blockchain. Learn more about TBD’s DID method here.
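Syntactically, a DID is just three colon-separated parts: the "did" scheme, a method name, and a method-specific identifier. A minimal sketch (the example identifiers below are made up, not real registered DIDs):

```python
# Minimal sketch: splitting a DID into its method and
# method-specific identifier. Example identifiers are invented.
def parse_did(did: str) -> dict:
    parts = did.split(":", 2)
    if len(parts) != 3 or parts[0] != "did":
        raise ValueError(f"not a valid DID: {did}")
    return {"method": parts[1], "id": parts[2]}

assert parse_did("did:plc:abc123xyz") == {"method": "plc", "id": "abc123xyz"}
assert parse_did("did:dht:def456uvw")["method"] == "dht"
```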

Differences

Many developers have told me that the way AT Proto handles authentication is what attracted them to Bluesky, but many of them don’t even realize that they’re using DIDs under the hood. On Bluesky, users can use one of their existing domain names as their username. Bluesky verifies ownership by performing a DNS lookup to make sure the domain belongs to the user. Once verified, the domain is linked to a DID, and the user is marked as verified on their account.
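The DNS-based check looks for a TXT record at _atproto.&lt;domain&gt; whose value carries the account's DID. The sketch below stubs out the DNS client with an in-memory zone; the domain and DID values are invented for illustration:

```python
# Stub standing in for a real DNS TXT lookup; the zone data is invented.
def lookup_txt(name: str):
    fake_zone = {"_atproto.alice.example": ["did=did:plc:abc123xyz"]}
    return fake_zone.get(name, [])

def resolve_handle(domain: str):
    # AT Proto convention: a TXT record at _atproto.<domain> of the
    # form "did=<did>" links the domain to the account's DID.
    for record in lookup_txt(f"_atproto.{domain}"):
        if record.startswith("did="):
            return record[len("did="):]
    return None

assert resolve_handle("alice.example") == "did:plc:abc123xyz"
assert resolve_handle("unverified.example") is None
```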

Web5 also uses DIDs for authentication but in a different way. DIDs eliminate the need for usernames and passwords. Instead, you can log in directly with your DID. This is possible because, in the Web5 ecosystem, every DID has cryptographic keys that securely prove ownership.

Permission Management Similarities

Both AT Proto and Web5 offer permission management systems, but there are key differences in who can manage these permissions.

Differences

AT Proto takes an application-centric approach to permission management. Permissions are defined by applications using schemas called lexicons, which dictate the rules that the PDS follows. As a result, the extent of control users have over their data depends on the permissions set by the application.

Permission management is where Web5 shines. Users define access controls through JSON schemas called Protocols, specifying who can access specific data stored in their DWN. This is why building a fertility tracking app with Web5 was ideal for me: I could explicitly deny social media apps, marketing platforms, and retailers access to my personal health data, while allowing only my healthcare provider and partner to access it.
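As a toy illustration of that idea, an access check over protocol-style rules might look like the following. The rule structure here is a simplification I made up, not the actual Web5 protocol JSON schema:

```python
# Simplified, invented rule structure in the spirit of Web5 Protocols;
# real protocol definitions are richer JSON schemas.
PROTOCOL_RULES = {
    "healthRecord": {
        "read": {"owner", "healthcareProvider", "partner"},
        "write": {"owner", "healthcareProvider"},
    },
}

def can(role: str, action: str, record_type: str) -> bool:
    # Deny by default: only roles the user listed get access.
    return role in PROTOCOL_RULES.get(record_type, {}).get(action, set())

assert can("healthcareProvider", "read", "healthRecord")
assert not can("marketingPlatform", "read", "healthRecord")
```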

Special URLs for Data Access Similarities

Most web users are familiar with URLs, which serve as web addresses to retrieve data online. Similarly, AT Proto and Web5 use specialized URLs to access data within their ecosystems.

Differences

In AT Proto, special URLs start with the prefix at:// and point to data in a user's PDS.

Example: at://alice.com/app.bsky.feed.post/1234 might reference a specific post in a user's social media feed.

In Web5, Decentralized Resource Locators (DRLs) start with the prefix https://dweb and link to data stored in a DWN.

Example: https://dweb/${did}/read/records/${recordId} allows a user to fetch a specific record from a DWN.
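Both address styles can be taken apart (or assembled) mechanically. A sketch using the example URLs above (the dict field names are my own labels, not official terminology):

```python
from urllib.parse import urlparse

def parse_at_uri(uri: str) -> dict:
    # e.g. at://alice.com/app.bsky.feed.post/1234
    parsed = urlparse(uri)
    collection, record_key = parsed.path.lstrip("/").split("/", 1)
    return {"authority": parsed.netloc, "collection": collection, "record_key": record_key}

def make_drl(did: str, record_id: str) -> str:
    # Mirrors the DRL shape shown above.
    return f"https://dweb/{did}/read/records/{record_id}"

at = parse_at_uri("at://alice.com/app.bsky.feed.post/1234")
assert at == {"authority": "alice.com", "collection": "app.bsky.feed.post", "record_key": "1234"}
assert make_drl("did:dht:abc123", "rec-42") == "https://dweb/did:dht:abc123/read/records/rec-42"
```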

Learn More

While I've described some core differences between Web5 and AT Proto, there are more interesting features to explore, including how Bluesky implements algorithmic choice, how Web5 uses W3C's Verifiable Credentials to prove digital identity, and how both platforms refer to individual data pieces as "records." These topics deserve their own deep dives. For now, I encourage you to continue exploring via:

🎥 Watch: My interview with Dan Abramov explaining Bluesky’s implementation

📚 Learn: Check out my SSI expert interview series called tbdTV

🤝 Join: Build with us and join our discussions on Discord.


Spruce Systems

Meet the SpruceID Team: Bryce Einck

If you're a SpruceID client, you may know Bryce! Get to know one of our incredible Technical Success Managers.
Name: Bryce Einck
Team: Product Delivery
Based in: San Diego, CA

About Bryce

I began my journey in customer service as a technician at the Apple Genius Bar, where I honed my troubleshooting and customer service skills. From there, I moved into technical operations and integration support for a healthcare all-in-one practice growth solution, where I expanded my expertise by learning PHP and working with IDEs for integration troubleshooting. I then transitioned to a Customer Success Manager and Product Deployment role at a tech startup focused on providing AI customer support solutions for e-commerce brands. In these positions, I gained experience with product deployment, Javascript, and consulting on using AI in customer service.

After a brief gap in work, I was looking for something new. I was excited to become a Technical Success Manager at SpruceID because the technology and privacy surrounding digital identity seemed challenging and important for our future.

Can you tell us about your role at SpruceID?

At SpruceID, I handle the day-to-day between Spruce and the California DMV, manage the priorities and expectations of SpruceID's deliverables, and provide technical troubleshooting for any issues that arise.

What do you find most rewarding about your job?

I enjoy being part of a process that improves and contributes features to the California DMV Wallet mobile application that benefit the digital identity community. It is fun to be on the edge of new tech, especially tech that has yet to be fully standardized.

What has been the most memorable moment for you at SpruceID so far?

The opportunity to travel to Brazil, meet the team, explore new food/culture, and mix local drinks. I also love to surf, and had the opportunity to surf in Brazil as well!

How do you define success in your role, and how do you measure it?

Success in my role is achieved by positively managing expectations and delivering on what is asked for and promised. Success also means supporting my team in any way I can. Success can be hard to measure at a startup due to the constantly changing landscape, so I measure it by consistently delivering a high-quality product.

What is your favorite part about working at SpruceID?

I find the team incredibly smart, fun, and supportive!

Fun Facts

What do you enjoy doing in your free time? I enjoy being outdoors, but to stay active, surfing and bouldering are my go-tos year-round. All my other free time is spent with my family and friends, playing overcompetitive card/board games, and cooking.

If you could be any tree, what tree would you be and why? I would choose to be a Redwood tree. I grew up surrounded by them and have always loved how large they get, their ability to grow together in angel rings as a support system, and their fire-resistant qualities.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

NIS2 - EU Network and Information Security Directive

by Martin Kuppinger

NIS2, the revised EU Network and Information Security Directive (EU 2022/2555) entered into force on January 16th, 2023. EU member states are obliged to transfer the directive into national law by October 17th, 2024. NIS2 mandates organizations to strengthen their cybersecurity posture and have proper incident handling and reporting in place. It also extends the scope very significantly, affecting an estimated 160,000 organizations within the EU. Thus, organizations must understand where to focus their cybersecurity investments to be prepared for NIS2.

Enhancing Security Frameworks through Zero Trust and Identity Threat Detection and Response (ITDR)

by Paul Fisher

In a world that is becoming increasingly digital, it is crucial to have strong security frameworks in place. The shift towards cloud computing, remote work, and digital transformation has expanded the attack surface for organizations, making traditional security models insufficient. This KuppingerCole White Paper explores the integration of Zero Trust principles and Identity Threat Detection and Response (ITDR) to enhance security frameworks, providing a proactive and comprehensive approach to safeguarding digital assets.

Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part…

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 2)

This is the second of three posts over the next three weeks to release the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI” and was originally published by Chris Were, CEO and co-founder at Verida. Part 1 is here.

Confidential Compute

A growing number of confidential compute offerings from the large cloud providers give access to Trusted Execution Environments (TEEs). These include AWS Nitro, Google Confidential Compute, and Azure Confidential Compute. Tokenized confidential compute offerings such as Marlin Oyster and Super Protocol have also emerged recently.

These compute offerings typically allow a container (such as a Docker instance) to be deployed within a secure enclave on secure TEE hardware. The enclave has a range of verification and security measures that can prove both that the code and data running in the enclave are what you expect and that the enclave has been deployed in a tamper-resistant manner.

There are some important limitations to these secure enclaves, namely:

- There is no direct access available to the enclave from the infrastructure operator. Communication occurs via a dedicated virtual socket between the secure enclave and the host machine (*).
- There is no disk storage available; everything must be stored in RAM.
- Direct GPU access is typically not available within the secure enclave (necessary for high performance LLM training and inference), however this capability is expected to be available in early 2025.

(*) In some instances the infrastructure operator controls both the hardware attestation key and the cloud infrastructure which introduces security risks that need to be carefully worked through, but is outside the scope of this document.

The Verida network is effectively a database offering high performance data synchronization and decryption. While secure enclaves do not have local disk access (by design), it is possible to give a secure enclave a private key, enabling the enclave to quickly download user data, load it into memory and perform operations.

While enclaves do not have direct access to the Internet, it is possible to facilitate secure socket connections between the host machine and enclave to “proxy” web requests to the outside world. This increases the surface area of possible attacks on the security of the enclave, but is also a necessary requirement for confidential compute that interacts with other web services.

It is critical that confidential AI inference for user prompts has a fast response time to ensure a high quality experience for end users. Direct GPU access via confidential compute is most likely necessary to meet these requirements. Access to GPUs with TEEs is currently limited, however products such as the NVIDIA H100 offer these capabilities and these capabilities will be made available for use within the Verida network in due course.

Self-Sovereign Compute

Verida offers a self-sovereign compute infrastructure stack that exists on top of confidential compute infrastructure.

Figure 1: Self-Sovereign Compute Architecture

The self-sovereign compute infrastructure provides the following guarantees:

- User data is not accessible by infrastructure node operators.
- Runtime code can be verified to ensure it is running the expected code.
- Users are in complete control over their private data and can grant / revoke access to third parties at any time.
- Third-party developers can build and deploy code that will operate on user data in a confidential manner.
- Users are in complete control over the compute services that can operate on their data and can grant / revoke access to third parties at any time.

There are two distinct types of compute with different infrastructure requirements: Stateless Confidential Compute and Stateful Confidential Compute.

Stateless (Generic) Confidential Compute

This type of computation is stateless: it retains no user data between API requests. However, it can request user data from other APIs and process that user data in a confidential manner.

Here are some examples of Generic Stateless Compute that would operate on the network.

Figure 2: Verida Personal Data Bridge

Private Data Bridge facilitates users connecting to third-party platform APIs (ie: Meta, Google, Amazon, etc.). These nodes must operate in a confidential manner as they store API secrets, handle end user access / refresh tokens to the third-party platforms, pull sensitive user data from those platforms, and then use private user keys to store that data in users’ private databases on the Verida network.

LLM APIs accept user prompts that contain sensitive user data, so they must operate in a confidential compute environment.

AI APIs such as AI prompt services and AI agent services provide the “glue” to interact between user data and LLMs. An AI service can use the User Data APIs (see below) to directly access user data. This enables it to facilitate retrieval-augmented generation (RAG) via the LLM APIs, leveraging user data. These APIs may also save data back to users’ databases as a result of a request (i.e., saving data into a vector database for future RAG queries).

See “Self-Sovereign AI Interaction Model” from Part 1 for a breakdown of how these generic compute services can interact together to provide AI services on user data.

Stateful (User) Confidential Compute

This type of computation is stateful, where user data remains available (in memory) for an extended period of time. This enhances performance and, ultimately, the user experience for end users.

A User Data API will enable authorized third party applications (such as private AI agents) to easily and quickly access decrypted private user data. It is assumed there is a single User Data API, however in reality it is likely there will be multiple API services that operate on different infrastructure.

Here are some examples of the types of data that would be available for access:

- Chat history across multiple platforms (Telegram, Signal, Slack, Whatsapp, etc.)
- Web browser history
- Corporate knowledge base (ie: Notion, Google Drive, etc)
- Emails
- Financial transactions
- Product purchases
- Health data

Each of these data types has a different volume and size, which will also differ between users. It’s expected the total storage required for an individual user would be somewhere between 100MB and 2GB, whereas enterprise knowledge bases will be much larger.

In the first phase, the focus will be on structured data, not images or videos. This aligns with Verida’s existing storage node infrastructure and aids the development of a first iteration of data schemas for AI data interoperability.

The User Data API exposes endpoints to support the following data services:

- Authentication for decentralized identities to connect their account to a User Data API Node
- Authentication to obtain access and refresh tokens for third-party applications
- Database queries that execute over a user’s data
- Keyword (Lucene) style search over a user’s data
- Vector database search over a user’s data

Connecting Stateful Compute to Decentralized Identities

Third party applications obtain an access token that allows scoped access to user data, based on the consent granted by the user.
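As a toy illustration of consent-scoped, expiring access (the scope strings and token shape are invented; the litepaper does not define a concrete token format):

```python
import time

# Invented token shape; shown only to illustrate scoped, expiring access.
def is_allowed(token: dict, requested_scope: str, now=None) -> bool:
    now = time.time() if now is None else now
    return now < token["expires_at"] and requested_scope in token["scopes"]

token = {"scopes": {"db:query:chat", "search:keyword"}, "expires_at": time.time() + 3600}
assert is_allowed(token, "db:query:chat")
assert not is_allowed(token, "db:query:health")
assert not is_allowed(token, "db:query:chat", now=token["expires_at"] + 1)
```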

A decentralized identity on the Verida network can authorize three or more self-sovereign compute nodes on the network to manage access to their data for third-party applications. This is done via the serviceEndpoint capability on the identity’s DID Document. This operates in the same way that the current Verida database storage network allocates storage nodes to be responsible for user data.
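An illustrative DID Document fragment is shown below. The "id"/"type"/"serviceEndpoint" field names follow the W3C DID Core shape, but the service type, endpoints, and DID value are invented for this example:

```python
# Invented example DID Document; only the field layout follows DID Core.
did_document = {
    "id": "did:vda:example",
    "service": [
        {"id": "#compute-1", "type": "SelfSovereignCompute", "serviceEndpoint": "https://node1.example/api"},
        {"id": "#compute-2", "type": "SelfSovereignCompute", "serviceEndpoint": "https://node2.example/api"},
        {"id": "#compute-3", "type": "SelfSovereignCompute", "serviceEndpoint": "https://node3.example/api"},
    ],
}

def endpoints(doc: dict, service_type: str) -> list:
    # Collect the endpoints of every service entry of the given type.
    return [s["serviceEndpoint"] for s in doc.get("service", []) if s["type"] == service_type]

assert len(endpoints(did_document, "SelfSovereignCompute")) == 3
```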

Secure enclaves have no disk access, however user data is available (encrypted) on the Verida network and can be synchronized on demand given the appropriate user private key. It’s necessary for user data to be “hot loaded” when required which involves synchronizing the encrypted user data from the Verida network, decrypting it, storing it in memory and then adding other metadata (i.e., search indexes). This occurs when an initial API request is made, ensuring user data is ready for fast access for third-party applications.

After a set period of time of inactivity (i.e., 1 hour) the user data will be unloaded from memory to save resources on the underlying compute node. In this way, a single User Data API node can service requests for multiple decentralized identities at once.
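The load-on-first-request, unload-after-inactivity lifecycle described above can be sketched as a small TTL cache. This is a simplification: node selection, decryption, and index building are omitted.

```python
import time

class HotCache:
    """Keeps per-identity data in memory, evicting after a period of inactivity."""

    def __init__(self, ttl_seconds: float = 3600):  # the text's example: 1 hour
        self.ttl = ttl_seconds
        self.store = {}  # did -> [data, last_access]

    def get(self, did: str, loader):
        self.evict_idle()
        if did not in self.store:
            # "Hot load": fetch, decrypt and index on first request.
            self.store[did] = [loader(did), time.monotonic()]
        entry = self.store[did]
        entry[1] = time.monotonic()  # refresh the inactivity timer
        return entry[0]

    def evict_idle(self):
        now = time.monotonic()
        for did in [d for d, (_, last) in self.store.items() if now - last > self.ttl]:
            del self.store[did]

cache = HotCache(ttl_seconds=0.01)
assert cache.get("did:vda:alice", lambda d: {"owner": d})["owner"] == "did:vda:alice"
```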

It will be necessary to ensure “hot loading” is fast enough to minimize the first interaction time for end users. It’s also essential these compute nodes have sufficient memory to load data for multiple users at once. Verida has developed an internal proof-of-concept verifying that “hot loading” user data will be a viable solution.

For enhanced privacy and security, the data and execution for each decentralized identity will operate in an isolated VM within the secure enclave of the confidential compute node.

Stay tuned, the third and final release of the Litepaper will be made available next week.

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part… was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyDEX

What we do: Identity as a Service

This blog is fourth in a series explaining how Mydex’s personal data infrastructure works. It explains how our platforms help deliver our mission of empowering individuals with their own data: how it enables them to use this data to manage their lives better and assert their human rights in a practical way on a daily basis.

Blogs in this series are:

- What IS a Personal Data Store?
- Personal Data Stores and Data Sharing
- Connecting ‘data about me’ to the world around me
- Identity as a Service

Thirty years ago, when the Internet was still a new thing, a joke started doing the rounds. “On the internet,” it said, “nobody knows you’re a dog”.

It was a flippant comment but it was also amazingly prescient. This issue of knowing who is at the other end of the line has continued to dog the provision of digital services ever since.

When you see a friend or family member in the street you can recognise them instantly. In that instant, your brain processes dozens of cues relating to their facial features and expressions, their voice, size and weight, gait, mannerisms and gestures, so that you ‘just know’ it’s them. It does these things so fast and accurately that it seems incredibly simple. But it is not, as robotics and AI practitioners have discovered to their cost over many decades.

None of the cues that our brains process so brilliantly are available when you deal with another person remotely, online. Hence that early Internet joke.

For a society and economy that does more and more things online, this is incredibly important. It’s not just about fraud, though that is a big and ever-present danger. It’s also about simple practicality, efficiency and quality. If people and organisations want to do business with each other online, they need to be able to recognise one another. Online or ‘digital’ identity is thus a sine qua non of all online service provision: without being able to recognise people when they sign up to and use an online service, it’s impossible for that service to operate.

Mydex personal data stores are helping to solve this problem, in two ways.

Two meanings of ‘identity’

Before we go any further, there’s one big source of confusion that we need to address. In the context of online interactions and transactions the term ‘digital identity’ is commonly used to mean two very different things. In many conversations and debates, people move seamlessly from one of these meanings to the other and back again without even realising they’re doing it. The result is endless confusion.

One of these meanings is knowing (or at least being pretty confident) that the person (or organisation) that you are dealing with is who they say they are. This is the whole area of identity assurance (sometimes called identity verification). Like all those cues of sight, sound and behaviour that we use to recognise our friends and family, this can involve gathering quite a lot of information about the person and ‘binding’ it to them. For example, you might know their name, address and age, and that they have this passport number and that driving licence number, and so on. The more bits of information you have about them, the more confident you can be that they are who they say they are.

The second meaning of identity is more mundane and administrative, but perhaps even more important. It’s about simply recognising them when they turn up at your front door — when they log in to a website or app for example. This, we call identity authentication.

The two may be connected. For example, a bank might go through a process of identity assurance when first providing a customer with a bank account. At this stage the bank needs to have lots of details about who the person is. But once that process is complete, all the bank needs to do is recognise that customer when they return to use the service, for example by use of a username and password and/or other authentication steps. This is the identity authentication bit.

On the other hand, identity assurance and identity authentication might not be connected at all. With some types of service, say when you are subscribing to a newsletter, the service provider doesn’t really need to know who the person is at all. All they need to know is if it’s the same person returning to use that service. In this case, the person could just as well use an invented name such as Mickey Mouse, along with a password like M-Mouse and it wouldn’t really matter. The service could still operate.

Once the ‘relying party’ (the party using the authentication) knows that the person is using the same identifiers, they can then map their activities, records, specific preferences etc to that individual, for their use of the service, without necessarily knowing who they actually are.

Mydex’s role in identity

Mydex’s personal data store infrastructure makes a fundamental contribution to both types of identity challenge. By enabling individuals to amass large quantities of verified attributes (sometimes referred to as verified credentials) about themselves, and to share these verified attributes easily, quickly and safely, our personal data stores go a long way to solving the problem of identity assurance and verification, without the need for privacy invading processes such as ‘identity cards’. You can see more detail about what we do on this front here.

However, the focus of this blog is on the second, practical, administrative matter of identity authentication — what all of us have to do many times a day when logging in to different types of online service.

Here, the current state of play is … a complete mess.

It grew into this mess quite naturally. First off, in the very earliest days of online services, service providers had to recognise customers when they logged in, used and returned to the service. So they invented the username and password.

It’s a pretty neat solution, except for one thing. Every different organisation created its own bespoke process for recognising people when they use a service, requiring individuals to invent (and remember) hundreds or perhaps thousands of different usernames and passwords. (Or, for the sake of convenience, they could use just one username and password, in which case if they ever got hacked the hacker would have access to every single service they had ever used).

This organisation-centric ‘bespoke solution’ to identity authentication multiplied costs and complexity for both people and service providers many times over. Most service providers had no desire to be in ‘the username and password business’ but took it on simply because they had to. It was a cost of doing business.

Then, monopolist digital platforms like Google and Facebook spotted a market opportunity. “If you log in to our service we can use the credentials we have created for you to log you on to other services!” In this way, individuals didn’t have to remember hundreds of different usernames and passwords, and service providers could get out of having to manage their username and password business. How convenient! Social sign-in was born.

On the surface, it looked like an ideal win-win. But there was one drawback to this ‘solution’, and it is an ABSOLUTELY HUGE drawback. It delivers privacy ‘bleed’ on a gargantuan scale. By letting the digital monopolists provide ‘social sign-in’ services, individuals effectively give them permission to track their movements across the entire internet, gathering data about everything they do online — all to further concentrate power and profits in the hands of these monopolists.

Social sign-in is one of today’s volcano issues and scandals, just waiting to blow up as and when people begin to realise just how deeply invasive and pervasive and exploitative it is — all to escape the inconveniences and costs created by the first faulty attempts to solve the identity authentication problem in an organisation-centric way.

Where Mydex fits in

With Mydex’s Identity (authentication) as a Service (IDaaS) the core idea of social sign-in (i.e. only having to log in once to access many different services) is still achieved but without any privacy bleed. In fact, the goal of a single log-in is achieved while enhancing individuals’ rights and control.

It works like this. When an individual gets their personal data store they set up a username and password by which Mydex can recognise them when they log-in (i.e. no different to any other service provider). They have this for life. Then, once the individual is logged in to Mydex they can use Mydex’s connections with other services that are connected to Mydex to automatically log in to those services too.

This means that individuals can flow from one service to another without ever having to log in to these other services — because all the handshakes are working for them, automatically, behind the scenes, not getting in the way of what they are trying to do.

But this time, there is no data surveillance. Mydex is not tracking the individual anywhere. It is not collecting any information about where they go or what they do online. It is simply using the fact that it has established a secure connection with another service to open a gate and let the individual through, if and when they want to pass through that particular gate (i.e. to that particular service).

Service providers can still minimise their involvement in the username and password service but with an added benefit that, in using Mydex’s IDaaS they are not handing over oodles of data about their customers to Silicon Valley digital monopolies. Any data generated by the transaction or interaction goes into just one of two places: into relying parties’ own systems or into the individual’s personal data store. Never to a third party, including Mydex. That’s because Mydex cannot see any of the data that goes into the individual’s personal data store as explained here.

The result is that both sides benefit from both convenience and efficiency and added safety. Why added safety?

Originally, identity authentication systems were established by organisations to protect their own digital front doors. They were designed to protect the safety of the organisation, not the individual. The Mydex approach is designed to help individuals protect their digital front doors. It’s about empowering citizens with agency; with the information services they need to make their way efficiently and effectively within a complex world of service provision.

Because data about interactions is stored in the individual’s PDS, every time the Mydex ID is used it creates a log which the individual can inspect. For example, it could alert them to the fact that somebody has tried to use their ID to log-in to a service. In this way, the individual gets an audit trail of every use of their Mydex ID. This information is held in their PDS for their use alone, away from prying eyes — information that is NOT handed over to the likes of Google or Facebook.

Just to emphasise: This is data that Mydex itself cannot access because each individual has their own private encryption key to their own PDS. This means that while Mydex holds the data (in encrypted form) in its systems it cannot actually ‘see’ its content.

Extra added value

The above provides a simple summary of Mydex’s Identity as a Service model. But there is more to this simple service than meets the eye.

First, individuals can increase the security of their interactions if they want to, by adding extra layers of security. They can, for example, require a ‘multifactor authentication’ process whereby an additional piece of information is used to authenticate their identity. This could be a one-time code sent to their phone or email, or generated by an authenticator app.
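Authenticator-app codes of this kind are typically TOTP (RFC 6238): an HMAC over the current 30-second time window, truncated to six digits. A stdlib-only sketch of the generic algorithm (not Mydex's specific implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, at=None) -> str:
    # Counter = number of timesteps elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's SHA-1 test secret, evaluated at T=59 seconds.
assert totp(b"12345678901234567890", at=59) == "287082"
```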

Second, the individual can also add other identifiers like email addresses and mobile numbers to their MydexID to protect them from use by anyone else. Registering multiple email addresses and mobile numbers also allows the individual to select any of these alongside their core MydexID itself to log in, because they are all linked together. This delivers greater security and protection and also overcomes those issues where people lose access to an email or mobile number. Now they always have back-up routes for accessing their MydexID and linked services.

Third, individuals can set preferences about where notifications may be sent to them, for example a specific email address, a mobile number, or both. Each person has different ways they prefer to get notifications. This gives them the ability to make that choice independently of any relying party (service provider).

This is NOT about giving service providers the power to create hoops for individuals to jump through. It’s about enabling individuals to add extra layers of security if and when they feel they need to. It’s about putting the individual in control.

Fourth, there may be occasions when an individual wishes to log in to a service provider (such as a researcher or survey outfit) where they share information about themselves but want to do so anonymously. They can use their Mydex ID to do this. This is because, along with the Mydex ID comes what we call a ‘universal unique identifier’ (UUID) which hides their Mydex ID and contact details from the service provider.

This UUID acts like a wrapper that hides what is inside. It provides the same guarantees as a username and password without exposing those identifiers themselves. It can be used by the service provider to recognise that it is the same person returning to the service without actually knowing who that person is.

This enables researchers who want to work with someone over a period of time to observe changes in their behaviours and life without actually knowing who they are. And it enables individuals to participate in such research safely and securely.
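One plausible way such a pseudonymous identifier can behave (a sketch, not Mydex's actual implementation): derive a stable UUID from the person's core ID plus the relying party's name, so a given service always sees the same identifier for a returning participant, while identifiers at two different services cannot be linked to each other.

```python
import uuid

# Hypothetical namespace for the identity service; in practice this would be
# a secret held by the service so outsiders cannot recompute pseudonyms.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "ids.example.org")

def pseudonym(core_id: str, relying_party: str) -> uuid.UUID:
    """Stable per-service pseudonym: the same (user, service) pair always
    maps to the same UUID, but UUIDs for different services are unrelated."""
    return uuid.uuid5(NAMESPACE, f"{core_id}|{relying_party}")

survey_id = pseudonym("alice@example-id", "survey.example.com")
returning = pseudonym("alice@example-id", "survey.example.com")
elsewhere = pseudonym("alice@example-id", "research.example.net")

assert survey_id == returning     # recognisably the same returning person
assert survey_id != elsewhere     # unlinkable across services
```

The names and namespace above are illustrative; the point is only that a deterministic, one-way derivation gives the "same person returning" guarantee without revealing who the person is.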

Fifth, the system allows identity authentication to work ‘in reverse’: if they have already signed in to a service that’s connected to the Mydex IDaaS, individuals can use that login to also access their personal data store (PDS). There, they can add and update data and manage their preferences, including adding further multi-factor authentication options and approving connections that allow subscribers to add data to their PDS.

Further Benefits

Service providers further benefit in a number of ways. As well as not having to operate their own username and password business, they can use the Mydex ID to connect to the individual’s personal data store (if the individual wants them to connect). This opens the door to safe, secure, permissioned, two-way data sharing.

For example, if the individual already holds a profile about themselves in their PDS — a profile containing data usually held in a service provider’s ‘My Account’ functionality — then the individual can simply click a button to provide that information to the service provider. No more having to fill in online forms!

This makes the process of onboarding onto a new service much easier, quicker and safer, especially for smaller organisations.

Service providers can also trigger multi-factor authentication processes if they require it — as do most banks for example. In particularly sensitive situations, it is also possible to create unique identities that only work for that particular transaction and cannot be reused once that transaction has been completed.

Conclusion

Thirty years ago, it was a joke that people didn’t know who they were dealing with when interacting online. Today, it’s no longer a joke. It’s a massive cost and hassle for millions of people and organisations alike. These costs and inconveniences are being gamed and abused to an absurd extent by both fraudsters and monopolists.

But there are ways to solve this problem safely and efficiently. And Mydex has found a way to do just that.

What we do: Identity as a Service was originally published in Mydex on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 26. August 2024

SC Media - Identity and Access

Texas Dow Employees Credit Union notifies 500,000 of MOVEit breach

The long delay in finding the breach in the Texas credit union case showcases the long tail of the MOVEit incident.



How Okta is building a security culture

Following a series of embarrassing incidents that undermined trust in its products, Okta is putting security first and foremost for its clients and for itself.



Ontology

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections


​​The recent arrest of Telegram’s CEO Pavel Durov at a Paris airport is more than just a headline; it’s a stark reminder of the escalating global crackdown on privacy-centric platforms. Durov, who has championed digital freedom, is now facing serious allegations that his platform has been used for illegal activities ranging from money laundering to child exploitation. But beneath these charges lies a broader, more urgent issue — the clash between centralized control and the fundamental need for decentralization, censorship resistance, and privacy in our digital lives.

Telegram, like many centralized platforms, operates in a gray area where user privacy is at odds with government demands for access and control. This arrest underscores the vulnerabilities of centralized systems — where a single point of failure, like Durov’s arrest, can jeopardize the entire platform and its user base. The incident raises critical questions: How much control should governments have over communication platforms? And, more importantly, how can we safeguard individual privacy in an increasingly surveilled world?

Decentralized systems offer a compelling solution. Unlike traditional platforms, they are not controlled by any single entity, making them inherently resistant to censorship and external pressure. A decentralized messaging app, for example, would not have a CEO who could be arrested, nor would it have servers that could be easily seized. This structure ensures that users maintain control over their data and communications, rather than relinquishing it to a central authority.

Moreover, decentralized identity (DID) plays a crucial role in this landscape. DID allows individuals to own and control their identities across different platforms without depending on a centralized authority. This is essential in preventing the misuse of personal data and ensuring that privacy remains intact, even if one platform is compromised. In an era where governments and corporations alike are vying for more control over digital spaces, the protection offered by DID is invaluable.

The implications of Durov’s arrest go beyond Telegram. It signals the growing pressure on privacy-focused platforms and the need for a shift toward decentralization. As governments increase their grip on digital communications, the only sustainable path forward lies in systems that are beyond their reach — systems that prioritize individual autonomy, censorship resistance, and privacy. The rise of decentralized identity technologies is not just timely; it’s necessary for preserving the freedom that centralized platforms can no longer guarantee.

In conclusion, Durov’s arrest is a wake-up call. It underscores the fragility of centralized systems in the face of authoritarian pressure and the critical need for decentralized alternatives that respect and protect our privacy. As the battle over digital freedom intensifies, decentralization and decentralized identity will be key to ensuring that the internet remains a space for free and open communication, untainted by the heavy hand of censorship and control.

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spherical Cow Consulting

Digital Identity in the Age of AI: Challenges and Opportunities


Yes, AI is everywhere. And yes, that means it is having an impact (one that will only grow) on the digital identity space. And like most other transformative technologies, the impact will be incredibly positive … and also something to be very concerned about. Now that the paper led by OpenAI asking policymakers, technologists, and standards bodies to think about how to develop mechanisms to identify whether an entity online is a person or an AI has been published (I had a small part in that paper), the whole AI-and-identity question is back at the forefront of my brain.

How AI is Changing Digital Identity Security

As our online identities grow more complex, artificial intelligence (AI) is playing a bigger role in keeping them safe. Organizations use AI to spot all sorts of nefarious activities and protect personal information by analyzing patterns and catching anything out of the ordinary. (Which makes me ask, “what is ordinary and who defines it?” I’d love to have that conversation sometime over beverages.)

AI isn’t just for tech giants—industries like banking and e-commerce are using it to prevent fraud and verify identities. For example, in banking, AI can track transaction habits to flag anything unusual, potentially stopping fraud before it happens. In online shopping, AI helps confirm who you are during transactions, cutting down on the risk of identity theft.

What is Adaptive Authentication?

Adaptive authentication is changing how we verify digital identities. Instead of relying on passwords, this method uses AI to evaluate the risk of an access request in real time. It looks at factors like where the request is coming from, what device is being used, and what time it is.

This approach has big benefits. For users, it means fewer annoying password prompts. For companies, it means stronger security because the system can adjust the level of authentication needed based on the perceived risk. All good stuff, until you look at the amount of data AI must access in order to make these determinations. Privacy advocates have a lot to say about this.
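As a deliberately naive illustration of the idea, here is a toy risk scorer with made-up factors and weights; a production system would learn these from behavioural data rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    known_device: bool              # has this device been seen before?
    country_matches_history: bool   # is the request from a usual location?
    hour: int                       # local hour of the request, 0-23

def risk_score(req: AccessRequest) -> float:
    """Toy risk model: each anomalous factor adds a fixed weight."""
    score = 0.0
    if not req.known_device:
        score += 0.4
    if not req.country_matches_history:
        score += 0.4
    if req.hour < 6:                # unusual small-hours activity
        score += 0.2
    return score

def required_factors(req: AccessRequest) -> list[str]:
    """Step up the authentication demanded as perceived risk grows."""
    score = risk_score(req)
    if score < 0.3:
        return ["session_cookie"]              # low risk: no prompt at all
    if score < 0.7:
        return ["password"]
    return ["password", "one_time_code"]       # high risk: full MFA

assert required_factors(AccessRequest(True, True, 14)) == ["session_cookie"]
assert required_factors(AccessRequest(False, False, 3)) == ["password", "one_time_code"]
```

This captures the trade-off in the paragraph above: the familiar, low-risk request sails through with no prompt, while the anomalous one triggers the full challenge, and every extra signal the model consumes is data someone had to collect.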

The Challenges of AI in Digital Identity

So let’s talk about the privacy aspects for a moment. While AI offers new ways to secure digital identities, the ramifications when it comes to privacy are huge. AI systems need a lot of data to work effectively, and this raises questions about how that data is collected and used.

Another concern is the potential for AI to be used in malicious ways, like creating deepfakes—fake media that looks real but isn’t. This technology could be used to create false digital identities, making it harder to tell what’s real online.

The European Union’s AI Act tackles the issues of where and how AI might be used, and is the first comprehensive regulation in the world on the subject. But, being the first, there are still significant concerns about whether it is enough. The rest of the world is watching to see what works, what doesn’t, and what they can take away from the effort for their own regulations.

AI’s Role in Different Industries

AI-driven digital identity tools are being used in many sectors, each with unique challenges and applications:

- Finance: AI helps detect fraud faster and more accurately by analyzing years of transaction data to spot suspicious patterns.
- Healthcare: Digital identity is crucial for protecting patient privacy and streamlining services. AI helps verify identities and manage access to sensitive medical records, ensuring secure and personalized care.
- E-commerce: Online retailers use AI to prevent identity theft by analyzing shopping patterns. AI can flag unusual transactions that may indicate fraud, protecting both the customer and the retailer.
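The pattern analysis described for finance and e-commerce can be illustrated with a deliberately simple statistical check — a toy stand-in for the machine-learned models banks actually deploy:

```python
import statistics

def flag_unusual(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates from this customer's own
    spending history by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(amount - mean) / stdev      # how many "usual spreads" away is it?
    return z > threshold

history = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]   # typical card spend
assert flag_unusual(history, 49.0) is False       # in line with habit
assert flag_unusual(history, 900.0) is True       # wildly out of pattern
```

The key design point carries over to the real systems: the baseline of "normal" is per customer, derived from their own history, which is exactly why these models need so much behavioural data (and why privacy advocates have a lot to say about it).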

Is there an industry that AI won’t touch? If that industry has any kind of online presence, then I’d say no, probably not.

The Global View: Working Together on AI and Digital Identity

Digital identity challenges aren’t confined to one country—they’re global. Just like when thinking about the Internet, commerce, and human migration, geopolitical boundaries are just another consideration when it comes to digital identity. I’ve already mentioned the EU’s AI Act. If you’re following this space at all, you should also be aware of the OECD’s AI Principles, initially published in 2019 and updated earlier this year (May 2024). If you’re in the US, you really need to check out the Executive Order President Biden’s administration posted in October 2023, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

It’s always fascinating (and a little scary) when technology outpaces the law. Of course, it’s not all that great when the law outpaces technology and starts to make stuff up about what’s possible. If it wasn’t my digital identity and that of my 8 billion fellow humans, I’d heat up some popcorn and watch the demolition derby that is technology standards and regulations.

Wrap Up

So, yup, AI is having a big impact on digital identity. It’s making things safer, improving user experiences, and helping industries operate more efficiently. But with these benefits come challenges, especially around privacy and security.

For tech leaders, you kind of don’t have a choice. Your organization needs to get involved in shaping AI-driven digital identity solutions. By adopting these technologies now AND following the principles that exist to make it safe for your employees and customers, you will improve your organization’s security and efficiency. If you don’t, the hackers of the world will thank you.

And if you’re an individual contributor like me, stay on top of the tech news for the latest in security recommended practices. Look for any open calls for comments on the standards and principles that impact this space.

Of course, if you’d like to outsource paying attention to all this and get someone to write a monthly report on the latest, reach out to me, and we’ll see what’s possible.

The post Digital Identity in the Age of AI: Challenges and Opportunities appeared first on Spherical Cow Consulting.


Ontology

Unleash Your Inner Ontonaut with OntoNex Level


Are you ready to take your journey with Ontology to the next level? Introducing the OntoNex Level Program — our latest initiative designed to reward you for being an active part of the Ontology community. Whether you’re a conversation starter, network builder, or community guardian, there’s a role for you to shine and earn rewards along the way.

What’s OntoNex Level All About?

The OntoNex Level Program is more than just a rewards system; it’s a pathway for you to maximize your potential within the Ontology ecosystem. Each role is tailored to match your strengths and passions, allowing you to contribute meaningfully and earn coins that can be redeemed for exclusive rewards.

The Roles:

- Chatster: Energize the community with engaging conversations. Unlock achievements and earn coins with every message.
- Inviter: Grow our network by inviting new members. Earn 10 coins for each successful invite.
- Guard: Maintain a safe and welcoming environment by reporting spam. Earn 10 coins for every spam report.
- Helper: Share your Ontology knowledge by assisting others. Earn 10 coins for each helpful interaction.
- Campaigner: Participate in various community campaigns and events. Earn 5 coins for every event you join.

Level Up and Unlock Exclusive Rewards

As you accumulate coins, you can redeem them for special rewards:

- 100 coins: Buy a Loyal NFT Plus.
- 2000 coins: Unlock the ‘Monthly NFT Receiver’ role, and receive an NFT every month.
- 5000 coins: Unlock the ‘Weekly NFT Receiver’ role, and receive an NFT every week.

Track Your Progress

Stay on top of your achievements with these simple commands:

- /achievement: See your progress in completing achievements.
- /coins: Check your current coin balance.
- /buy: Purchase items with your coins.
- /item: View the items you already own.

Join Us on Discord!

Ready to dive in? The best way to get started is by joining our Discord community, where you can take on your role, engage with fellow Ontonauts, and start earning rewards today. Click here to join our Discord.

Unleash Your Inner Ontonaut with OntoNex Level was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 25. August 2024

KuppingerCole

WAF, WAAP, What? The Evolution of Web Application Firewalls


What makes a Web Application Firewall (WAF) a Web Application and API Protection (WAAP) solution? How is the landscape of the market changing and does every organization need a WAAP solution? Tune in to this episode of the Analyst Chat with guest Osman Celik and host Matthias Reinwarth to learn more.

Dive deeper into the topic



Friday, 23. August 2024

Elliptic

OFAC targets Russian war effort with 400 sanctions, identifying a crypto address connected to KB Vostok


The US Treasury’s Office of Foreign Assets Control (OFAC) has today issued sanctions against nearly 400 individuals and entities whose products and services enable Russia to sustain its war effort and evade sanctions. 

Amongst those sanctioned today is KB Vostok (a.k.a. Vostok Design Bureau), a drone manufacturer which specialises in the “development of industrial-grade unmanned aerial vehicles”. 


Dock

ISO 18013-5 Standard: What It Is And How It Works


With the growing adoption of digital identity initiatives, it has become more complex to ensure security, interoperability, and compliance, requiring adherence to rigid and evolving standards.

This is where ISO 18013-5 comes into play, offering a standardized approach to secure and verify digital identities. It's the backbone of mobile driver’s licenses (mDL) implementations, providing guidelines that enhance trust and facilitate verification processes.

In this post, we'll explore ISO 18013-5, covering its definition, benefits for governments, businesses, and individuals, and development history.

Full article: https://www.dock.io/post/iso-18013-5


KuppingerCole

The Anatomy of Cyber Resilience


by Osman Celik

In today's business landscape, cyber resilience is crucial for an organization's ability to sustain operations and deliver desired outcomes in the face of cyber threats or incidents. Cyber resilience encompasses not only the prevention and protection against cyber threats but also the ability to detect, respond to, and recover from them effectively. While often confused with cybersecurity, cyber resilience serves a distinct purpose within an organization's risk management strategy.

Cybersecurity vs. Cyber Resilience

Cybersecurity primarily focuses on protecting systems, networks, and data from unauthorized access. This is achieved through mechanisms such as firewalls, encryption, detection and response systems, and identity and access management. In contrast, cyber resilience goes a step further by ensuring business operations continue during and after a cyber incident. While cybersecurity aims to prevent incidents, cyber resilience assumes that breaches may occur and emphasizes maintaining business continuity and facilitating swift recovery.

The Inevitable Future with AI

As AI continues to integrate into our daily lives, it is inevitable that it will play a significant role in maintaining business continuity. However, this development presents both opportunities and challenges. On one hand, AI-powered tools enhance cyber resilience by improving detection and response times, as well as predicting and mitigating potential vulnerabilities. These technologies enable more sophisticated automation and reduce the impact of human error. On the other hand, AI also introduces new risks, as attackers leverage the same technologies to develop more advanced and sophisticated attacks.

Developing Cyber Resilience Strategies

Creating effective cyber resilience strategies involves thorough risk assessment, proactive planning, and continuous improvement. Organizations must begin by identifying their critical assets and assessing potential threats to understand their specific cyber threat landscape. With this information, they can establish a tailored cyber resilience framework.

A robust cyber resilience framework typically includes preventive measures like regular security updates and employee training, alongside incident detection and response protocols. Building resilience also requires regularly testing recovery and backup plans. Organizations should adapt their strategies based on lessons learned from past incidents and anticipate future challenges, which requires expertise, skill, and informed predictions.

Key Components of Cyber Resilience

Cyber resilience provides organizations with clear guidelines on restoring operations after a cyber incident. This involves well-defined recovery plans that are regularly tested and updated to address emerging vulnerabilities. Identifying critical systems and data is a priority, allowing organizations to focus their recovery efforts where they are needed most.

A cornerstone of cyber resilience is data backup. Without a reliable backup, a recovery plan is essentially ineffective. Backup strategies should be integrated into the broader resilience framework, with backups regularly updated and securely stored in multiple locations to protect against cyber threats. The emphasis is not just on creating backups but also on ensuring the ability to quickly access and restore data from these backups without compromising security or operational continuity.
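The point about verifying that backups can actually be relied on can be made concrete with a small sketch (file names and data are illustrative): before trusting a stored copy, confirm it is byte-identical to the source.

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original: Path, backup: Path) -> bool:
    """A recovery plan is only as good as its backups: confirm the stored
    copy matches the source before relying on it."""
    return checksum(original) == checksum(backup)

# Illustrative check against scratch files:
tmp = Path(tempfile.mkdtemp())
src, copy = tmp / "records.db", tmp / "records.db.bak"
src.write_bytes(b"critical business data")
copy.write_bytes(src.read_bytes())
assert verify_backup(src, copy)          # intact backup passes

copy.write_bytes(b"critical business dat4")
assert not verify_backup(src, copy)      # silent corruption is caught
```

Real resilience programmes go further, periodically rehearsing a full restore rather than just comparing digests, but routine integrity checks like this are the cheap first line of defence against discovering a corrupt backup only during an incident.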

Choosing the Right Frameworks for Your Cyber Resilience Strategy

When developing a cyber resilience strategy, organizations should consider key frameworks. The NIST (National Institute of Standards and Technology) Cybersecurity Framework offers a well-established approach with its six pillars: Identify, Protect, Detect, Respond, Recover, and Govern. Additionally, regulations such as DORA (Digital Operational Resilience Act) and NIS2 (Network and Information Systems Directive 2) should be reviewed, particularly by organizations operating within the European Union, to ensure that backup and recovery strategies are compliant and robust.

We are back in town - cyberevolution 24

We are excited to invite you to our cyberevolution event in Frankfurt on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cyber resilience will be one of the big topics on the agenda. In a combined session, Mike Small will discuss “Why you need data backup and how AI can help” and Joshua Hunter will provide insights into “Focus on Cyber Resilience - Prepare, Respond, Resume”. We look forward to seeing you there and to having some great discussions.


auth0

Developer Day 2024: A Sneak Peek

Take a sneak peek at DevDay. We have created 24 hours of content for you to level up your identity skills through talks, panel discussions, labs and much more!

Tokeny Solutions

56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead


Product Focus

56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead

This content is taken from the monthly Product Focus newsletter in August 2024.

Demand for onchain services is growing rapidly, with institutions increasingly moving into the space. According to recent research by Coinbase, 56% of Fortune 500 executives report that their companies are actively working on onchain projects. Tokenized assets, like T-bills, have become some of the hottest investments, with the value of tokenized US Treasury products soaring over 1,000% since the start of 2023, reaching $1.29 billion.

Crypto hedge funds and market makers are leveraging tokenized assets, such as BlackRock’s BUIDL, as collateral for trading coins and tokens, unlocking unique onchain opportunities not available offchain.

Source: Coinbase

The Challenges of Building Onchain Solutions In-House

While institutions are eager to move onchain, success hinges on control, compliance, and interoperability within open DLT infrastructures. Building these solutions in-house is risky, time-consuming, and expensive. That’s where our T-REX Engine, a suite of onchain APIs, comes in.

Why APIs Are the Backbone of Onchain Expansion

APIs are crucial because they enable seamless integration with existing systems, allowing institutions to build onchain capabilities without the need to overhaul their current infrastructure. They provide the flexibility and scalability needed to adapt to new market demands and regulatory requirements, making the transition to onchain more efficient, less risky, and faster. By offering modular, plug-and-play functionality, APIs ensure that institutions can quickly develop, deploy, and manage onchain services, keeping them ahead of the competition in a rapidly evolving market.

Introducing the T-REX Engine

The T-REX Engine is designed to empower institutions with a customizable onchain solution and a fully integrated ecosystem that ensures a plug-and-play experience. Here’s what sets T-REX apart:

- Most Proven Tokenization Engine: Shaped by over 120 tokenization use cases over the past seven years, T-REX has developed 1,000+ features, leveraging the best market standards like the ERC-3643 framework.
- Incomparable Ecosystem: With one access point, you connect to everything you need both onchain and offchain to manage tokenized securities and cash, thanks to our comprehensive ecosystem.
- Banking-Grade Security: We implement banking-grade security measures, with certifications like SOC2 and a 10/10 security score from smart contract audits, to secure your onchain operations.

By integrating with your existing systems, T-REX Engine enables you to customize tokenization use cases, providing your clients with an e-commerce-like asset purchase experience, a PayPal-like asset transfer experience, and new opportunities like interoperable assets with DeFi lending apps.

Here’s a brief overview of the T-REX Engine APIs:

- Identities APIs: Ensure compliance with onchain identity management by tracking ownership and enforcing eligibility rules.
- Assets APIs: Manage the entire lifecycle of tokenized assets, from securities and cash to real-world assets, with streamlined, all-in-one access.
- Offers APIs: Once tokenized, your assets can be made available anywhere onchain with enforced compliance. Offers APIs help you manage this effortlessly, covering any kind of distribution in both primary and secondary markets.

Learn more about these APIs on our website here or reply to this email to unlock API access and start building your own onchain system today.

Subscribe Newsletter

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs

- 56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead (23 August 2024)
- The Journey to Becoming the Leading Onchain Finance Operating System (19 July 2024)
- Streamline On-chain Compliance: Configure and Customize Anytime (3 June 2024)
- Multi-Chain Tokenization Made Simple (3 May 2024)
- Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
- Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
- Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
- ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)
- Introducing Multi-Party Approval for On-chain Agreements (5 December 2023)
- The Unified Investor App is Coming… (31 October 2023)

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us


The post 56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead appeared first on Tokeny.

Thursday, 22. August 2024

Spruce Systems

Debunking Myths about the Mobile Driver's License

Learn about some of the common misconceptions when it comes to mobile driver's licenses (mDLs).

While artificial intelligence is in the spotlight, a quieter technology revolution is underway: a large-scale push to build secure digital identity systems. This is, in part, driven by verifiable digital identity being a complementary technology to AI. With AI-generated text, images, and increasingly convincing videos, having a way to verify something or someone is provably who or what they claim to be will be crucial. The heightened security of encryption-backed identity can dramatically mitigate types of fraud, hacking, and impersonation.

Building digital ID is largely a problem of coordination – getting buy-in for a novel system from everyone from legislators to major enterprises to state agencies. One early leader in contention for defining the digital ID future is a set of standards known as “mDL,” or the Mobile Driver's License – a real, state-issued credential stored on a mobile device. The mDL is just one part of the fast-growing digital identity ecosystem, but it’s being used in our pilot program with the state of California and other pilots across the United States.

You might have some preconceptions about how a driver’s license that lives on a mobile device works based on your familiarity with other digital services, such as logging in to a website. But this new generation of credentials is built much differently, using recent innovations in cryptographic digital signatures.

This makes digital credentials, like a mobile driver’s license, far more secure and private than a web-based service, among other implications. But to understand this new kind of security and privacy, you have to leave behind some old ideas.

The “Photo of a Plastic ID” Myth

A mobile driver's license (mDL) is far more than just a digital image of your physical ID. Unlike a simple photo, an mDL is embedded with cryptographic digital signatures, ensuring that the data it contains is both tamper-evident and provably authentic. This means that anyone verifying your ID, whether in person or online, can trust that the information hasn’t been altered, providing higher security and trust than a static image.

One of the key advantages of mDLs is their versatility in both physical and digital realms. Whether you're verifying your identity in person, such as at a traffic stop or an airport, or over the internet for online services, mDLs offer a seamless digital verification experience. This flexibility is something a static image on your phone just can’t offer, especially as our lives become more intertwined with digital interactions.

While a photo of your ID reveals all your personal details, a significant benefit of mDLs is the ability to share only the necessary information for a specific interaction, rather than revealing all the personal details on your driver's license. For example, if you're buying age-restricted products, the mDL can confirm your age without exposing your address or other sensitive information. This minimal disclosure feature enhances privacy and reduces the risk of identity theft.
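The mechanics behind minimal disclosure can be sketched in miniature: the issuer salts and hashes each attribute, signs the digest list, and the holder later reveals only the attribute (and its salt) needed for the interaction. The sketch below is illustrative only, loosely in the style of selective-disclosure credential formats; it uses an HMAC as a stand-in for the issuer's asymmetric signature, and all names are hypothetical:

```python
import hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def attr_digest(salt: bytes, name: str, value) -> str:
    # Hash one salted attribute; the salt prevents guessing values from digests.
    payload = salt + name.encode() + json.dumps(value).encode()
    return hashlib.sha256(payload).hexdigest()

def issue(attributes: dict):
    """Issuer: salt and hash every attribute, then sign the sorted digest list."""
    salts = {name: secrets.token_bytes(16) for name in attributes}
    digests = sorted(attr_digest(salts[n], n, v) for n, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()
    return salts, digests, signature

def present(name: str, attributes: dict, salts: dict, digests: list, signature: str):
    """Holder: reveal one attribute plus its salt -- and nothing else."""
    return {"name": name, "value": attributes[name], "salt": salts[name],
            "digests": digests, "signature": signature}

def verify(p: dict) -> bool:
    """Verifier: check the signature, then that the revealed value hashes into it."""
    expected = hmac.new(ISSUER_KEY, json.dumps(p["digests"]).encode(), "sha256").hexdigest()
    return (hmac.compare_digest(expected, p["signature"])
            and attr_digest(p["salt"], p["name"], p["value"]) in p["digests"])

license_data = {"name": "Alex Doe", "address": "1 Main St", "age_over_21": True}
salts, digests, sig = issue(license_data)
proof = present("age_over_21", license_data, salts, digests, sig)  # address never leaves the wallet
assert verify(proof)
```

In a real mDL the verifier checks a public-key signature rather than sharing a secret with the issuer, but the privacy property is the same: only the digest list and the single revealed attribute are disclosed.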

Finally, mDLs are built on global standards like ISO/IEC 18013-5 and ISO/IEC 18013-7, which means they can be accepted across industries and borders. A photo of your ID might be accepted in some places, but it lacks the standardization needed for widespread trust and interoperability. These standards ensure that mDLs can be trusted by various entities, from law enforcement agencies to financial institutions, no matter where you are. This broad acceptance and reliability make mDLs a future-proof solution for secure identity verification in our interconnected world.

The “Phone Home” Myth

If you’re still new to the idea of the mobile driver’s license, you might assume they offer less privacy than a hard-copy ID. From bank accounts to college enrollment, we’ve become very used to proving our identity by sending a password to a remote database over the internet. Similarly, you might assume that a mobile driver’s license may require pinging back to a government agency server whenever someone wants to verify your identity. If that were how a mobile driver’s license worked, it would create yet another trail of data that could be used to track you, like many web services do today. This is known as the “phone home” problem.

To be clear, mobile driver's license programs can be implemented in that way, creating (even inadvertently) a new surveillance system. But there are ways to implement mobile driver's licenses that don't have to "phone home," which is how we approach our implementations at SpruceID in our work with customers.

The mDL standard is ultimately a shared data format, and the systems around it can be built in many ways. The core mDL architecture, however, can be implemented using an entirely new kind of digital "proof" that checks the validity of an ID issuer's digital signature locally, called "device retrieval" in the mDL specification. That means no pinging a remote server, and no risky data trail.

Instead, a mobile driver's license (or other digital credential) is verified by a file on your device. That file includes a digital "signature" proving that it came from the correct issuer, like the DMV. The signature corresponds to a secret private key held by the issuing agency, so no one but the DMV can issue DMV-signed credentials; it's tied to your specific hardware device, so the file itself can't be copied; and it's cryptographically bound to your identity information, so it can't be tampered with.
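The tamper-evidence and issuer-binding described above can be made concrete with a deliberately toy sketch of the asymmetry involved: the issuer signs with a private key, and any verifier can check the signature offline with the public key. This uses textbook RSA with tiny published example primes, which is utterly insecure and is not the actual mDL (ISO 18013-5/COSE) format; it only illustrates that verification is a pure local computation:

```python
import hashlib, json

# Textbook RSA with tiny primes -- utterly insecure, purely illustrative.
# p = 61, q = 53 -> n = 3233, e = 17, d = 2753 (a classic worked example).
N, E = 3233, 17          # public key: any verifier may hold this
D = 2753                 # private key: held only by the issuing agency

def h(credential: dict) -> int:
    raw = json.dumps(credential, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(raw).digest(), "big") % N

def issuer_sign(credential: dict) -> int:
    return pow(h(credential), D, N)       # only the issuer can compute this

def verify_offline(credential: dict, signature: int) -> bool:
    # Pure local computation: no server is contacted, no data trail is left.
    return pow(signature, E, N) == h(credential)

mdl = {"family_name": "Doe", "age_over_21": True}
sig = issuer_sign(mdl)
assert verify_offline(mdl, sig)   # genuine, verified entirely on-device
```

A real deployment uses standardized signature suites and hardware-bound keys; the point here is only that checking an issuer's signature needs nothing but the credential, the signature, and the issuer's public key.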

The “Supercookies” Myth

Even if a digital identity check doesn’t create a real-time trail of digital pings over the internet, an ID check can still leave a record on the device or system of the verifier. For instance, when you buy a case of beer, the liquor store might not ping the DMV’s server – but it will probably retain a record of the verification. 

These records can be a risk to your privacy. If a third party gathers the scattered records of your ID checks, they can create a record of some of your activities – for instance, how often you visit the liquor store. This is a widespread practice when it comes to records of your web browsing – the collated records of your online activity are known as “supercookies,” and are often used to target you with advertising.

This risk is a good example of how regulation and best practices are necessary complements to new technology – new laws, or reasonable disclosure frameworks, might be needed to ban the practice of making real-world supercookies. However, there’s also a more immediate solution: the issuers of digital credentials can impose data-deletion policies that require verifiers to delete records of identity checks. 

With a few exceptions, such as law enforcement, verifiers should be okay with deleting these records immediately, significantly reducing supercookie risk. Best of all, there are cryptographic methods for proving that the data is actually disposed of.

This is a great example of a key principle in digital credential design. The mobile driver’s license (mDL) is a data standard for digital identity, but the systems around that data standard can be designed in many different ways. Some ways of building an mDL system might enable or even encourage archiving data to build a “supercookie,” but systems can also be built to discourage or disallow that practice.

By the same token, other digital credential standards, including SD-JWTs and W3C Verifiable Credentials, can also be deployed in ways that enable tracking. In essentially every case, no tech standard can guarantee user privacy; therefore, how the system is designed, and how that design is guided by regulations and agreements, is key.

Technology, Legislation, and Markets In Harmony

Unfortunately, the greater privacy and control enabled by encryption-based digital identity won’t just happen magically. While the technology has the potential to create a more innovative and secure system, the specific way it is built in the coming years will determine whether that potential is fulfilled. 

Many of the teams building these systems have the highest ideals, and are already working to build privacy-preserving features into their structure. But technology alone isn’t enough, in this case, or in general: technology and policy must work in concert to create the future we want.

We believe the best way to guarantee a future identity system that’s both secure and private is legislation that supports the goals of the technology. That legislation, which organizations like the ACLU are currently pushing forward, would bar abuses like surveillance using digital identity – whether for commercial purposes, or more nefarious ones.

We encourage all players in the digital identity space, and potential future users of tools like the mobile driver’s license, to participate in those legislative efforts. Done right, they will help make sure that an exciting new technology supports freedom, safety, and innovation, working together as one.

Are you interested in learning more about digital credentials such as the mobile driver’s license and how they might work for your use case? Explore our website to learn more.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


IdRamp

MS Entra ID: Advanced Account Recovery with Identity Verification

IdRamp has partnered with Microsoft (MS) to bring Identity Verification (IDV) to the Entra ID account recovery process. Account takeover attacks increased by 350% last year, causing nearly $13 billion in losses.

The post MS Entra ID: Advanced Account Recovery with Identity Verification first appeared on Identity Verification Orchestration.

Verida

Verida and Marlin: A Partnership to Power Private AI

Verida and Marlin; Powering the next generation of private AI

Verida is on a mission to empower individuals to own and control their data, ultimately enabling a future of private AI. This vision involves creating a decentralized ecosystem where personal data can be securely managed, processed, and utilized without compromising privacy.

This collaboration will enable developers to build their own Private AI Assistants. Imagine an AI like ChatGPT, but with 100% end-to-end privacy — working exclusively for you. One private vault, with multiple data sources.

To achieve this, Verida is building a robust infrastructure stack that includes:

- Verida Private Data Bridge: Enabling seamless data transfer from various platforms to a user’s Verida vault.
- Confidential Compute: Utilizing Trusted Execution Environments (TEEs) to create a network of secure, isolated infrastructure nodes for data processing.
- Private Compute: Building upon confidential compute to provide granular user control over data access, usage and application deployment.

At the heart of Verida’s vision lies private AI, where AI models can be trained and operated on personal data while preserving user privacy. This requires a robust infrastructure capable of handling sensitive data securely and efficiently.

TEEs play a pivotal role in this ecosystem by providing secure, isolated environments for data processing and AI computations. However, deploying and managing TEE-based applications can be complex. This is where Marlin’s Oyster comes in.

Oyster is a TEE coprocessor for AI, designed to simplify the development and deployment of AI applications that require high levels of security and privacy. By leveraging Oyster, Verida will:

- Accelerate AI development: Oyster’s platform provides a ready-made infrastructure for deploying confidential AI applications, saving development time and resources for the development of Verida’s confidential compute network.
- Enhance AI security: Oyster’s TEE-based architecture strengthens the security of AI models and data, protecting sensitive information from unauthorized access and connecting to Verida’s existing confidential storage network.
- Optimize AI performance: Oyster’s focus on performance can help the Verida confidential compute network deliver faster and more efficient AI experiences for users.

Chris Were, CEO of Verida, expressed his enthusiasm for the partnership:

“We have been very impressed with the Marlin technology and the team as we have collaborated on our PoC over the past several months. There is a significant shortage of privacy-preserving computation options today, so it’s been refreshing to work with a great team and quickly put together a powerful demonstration of what’s possible.”

Esli, Head of Ecosystem at Marlin Foundation, added:

“By combining Verida’s technology with Marlin’s confidential compute platform, it is possible to unlock the power of a truly private AI assistant. This solution ensures that users’ personal information remains confidential even when training the assistant on their data.”

This partnership between Verida and Marlin represents a significant step forward in the development of private AI. By combining Verida’s vision with Marlin’s cutting-edge technology, we are creating a foundation for a future where individuals have complete control over their data and how it’s used by AI.

Together, Verida and Marlin are committed to empowering individuals and building a world where data privacy and AI coexist harmoniously.

About Verida Network

Verida is a decentralized network that empowers individuals to take control of their personal data, enabling secure storage, sharing, and management. Verida’s infrastructure supports private AI, decentralized identity (DID), and verifiable credentials, all while ensuring that users maintain ownership and control over their data. Verida’s mission is to create a future where users can harness the power of AI without compromising privacy or security.

For more information, visit Verida Network.

About Marlin

Marlin is a verifiable computing protocol featuring TEE and ZK-based coprocessors to delegate complex workloads over a decentralized cloud. Servers provisioned using smart contract calls host ML models, gateways, frontends, MEV or automation bots, or backends for arbitrary computations using external APIs with baked-in auto-scaling and fault tolerance. Marlin is backed by Binance Labs and Electric Capital.

For more information, visit Marlin

Verida and Marlin: A Partnership to Power Private AI was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

Ocean Nodes Incentives Update: Start Date & Dashboard Upgrades


In this post, we provide important updates on the Ocean Nodes Incentive Program and the rollout of Ocean Nodes Boosters (ONBs)

Introduction

We’ve been hard at work addressing some issues with the Incentives Dashboard and making sure everything is fair and square for everyone participating in the Ocean Nodes Incentives Program. Today, we’re back with an important update on the program and some enhancements to the Ocean Nodes Dashboard.

Our commitment to fairness and transparency is driving these changes, and we want to ensure that everyone in the community has a level playing field. Here’s what you need to know.

Incentives Program Update: Why We’re Moving the Start Date

As you may have noticed, we recently encountered an issue with the monitoring system responsible for tracking node uptime and eligibility for incentives. The good news, and the main reason for this blog post, is that thanks to our team’s hard work we’ve swiftly fixed the bugs and corrected the logic fault we identified in the eligibility checks.

However, to make sure everything is running smoothly and fairly, we’ve decided to push the start of the incentives program back to August 29. This allows us to roll out the necessary backend updates and ensures that our monitoring system is robust and reliable, creating a fair environment for all participants.

Ocean Nodes Dashboard: New Features and Improvements

While we work on the backend updates, we’ve already moved forward and added some improvements to the Ocean Node Dashboard:

Enhanced Table Functionality: We’ve added sorting, filtering, and the ability to select which columns you want to view. This gives you more control over how you interact with the data.

New Columns: “Reward Eligibility” and “Eligibility Issue”:

- Reward Eligibility: Indicates if your node is eligible for incentives and ONBs.
- Eligibility Issue: If your node isn’t eligible, this column will explain why. Currently, you might see messages like “Node cannot be accessed publicly, no public IP announced by the node!” or “Ocean Protocol Foundation node!”. In time we will try to provide more detailed information here.

Please note that the uptime that you see here is for the current epoch, meaning that every Thursday, the uptime will reset to 0 for all nodes. We’re storing all historical data, and in a future update, we’ll introduce more options for data visualization.

However, until we push the backend update, the values in the “Reward Eligibility” column might still be inaccurate. We’re working hard to solve this as soon as possible.

Steps to Install the Node and Be Eligible for Rewards

To help you get started and ensure your node is eligible for rewards, follow these steps:

1. Find your public IP: You’ll need this for the configuration. You can easily find it by googling “my IP”.
2. Run the Quickstart Guide: If you’ve already deployed a node, we recommend either redeploying with the guide or ensuring that your environment variables are correct and you’re running the latest version.
3. Get your Node ID: After starting the node, you can retrieve the ID from the console.
4. Expose Your Node to the Internet: From a different device, check if your node is accessible by running:
telnet {your ip} {P2P_ipV4BindTcpPort}

To forward the node port, please follow the instructions provided by your router manufacturer (e.g. Asus, TP-Link, Huawei, Mercusys, etc.).
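The telnet reachability check above can also be scripted; a minimal sketch using only Python’s standard library (the IP and port in the comment are placeholders, substitute your own public IP and the `P2P_ipV4BindTcpPort` value you configured):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Substitute your node's public IP and configured P2P port, e.g.:
# print(port_reachable("203.0.113.10", 9000))
```

Run it from a different device than the node itself, just as with telnet, so you are testing reachability from the public internet rather than the local machine.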

5. Verify eligibility on the Ocean Node Dashboard: Check https://nodes.oceanprotocol.com/ and search for your peerID to ensure your node is correctly configured.

Considerations

As Ocean Nodes are currently in an alpha stage, please remember to:

- Regularly update your deployment to maximize uptime.
- Account for potential issues such as node bugs*, internet disruptions, and more when measuring uptime.
- *Report bugs in our dedicated Discord channel so we can address them as soon as possible. When reporting, please include useful information such as the environment variables (excluding private keys), hardware specifications, and relevant logs. Please remember NOT to share your private key with anybody.
- Note: The current uptime may not be accurate as we’ve been testing and the monitoring system has been off multiple times. The uptime will reset on Thursday, August 29, at 00:00 UTC.

Ocean Nodes Boosters (ONBs): Criteria and Distribution

For Phase 1 ONBs, which will grant a 1.5 rewards multiplier, we will consider node uptime. Starting on August 29, we’ll begin tracking uptime across the first four epochs, which will run from August 29 to September 26.

At the end of this period, the top 50 nodes with the highest uptime will each receive a Phase 1 ONB. If multiple nodes have the same uptime, we’ll mint additional ONBs to ensure that no one is left out.

To qualify for ONBs and incentives, your node must meet the following criteria:

- Public Accessibility: Nodes must have a public IP address.
- API and P2P Ports: Nodes must expose both HTTP API and P2P ports to facilitate seamless communication within the network.

Conclusion

We appreciate your patience and understanding as we work through these updates. Our goal is to ensure that the Ocean Nodes Incentives Program is fair and rewarding for everyone involved. Thank you for your continued support!

Stay tuned for more updates by following us on X and joining the discussion in our Discord Server.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Ocean Protocol is a founding member of the ASI Alliance.

Follow Ocean on Twitter or Telegram to keep up to date, and Predictoor’s Twitter for its news. Chat directly with the Ocean community on Discord. Track Ocean’s tech progress directly on GitHub.

Ocean Nodes Incentives Update: Start Date & Dashboard Upgrades was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockto

Open Banking & PSD2: On income verification with bank transactions, among other topics

Podcast: Open Banking & PSD2
On income verification with bank transactions, among other topics

In this episode of the Data Sharing Podcast, we dive into the world of Open Banking and PSD2. Open Banking enables consumers and businesses to share financial data with third parties, opening up new opportunities for innovation and services within the financial sector.

Thanks to Open Banking, organizations can quickly and accurately verify the income data of prospective customers, leading to more efficient and reliable credit assessments and other (financial) services.


Ocean Protocol

DF103 Completes and DF104 Launches

Predictoor DF103 rewards available. DF104 runs Aug 22 — Aug 29, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 103 (DF103) has completed.

DF104 is live today, Aug 22. It concludes on August 29. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF104 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF104

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF103 Completes and DF104 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 21. August 2024

Thales Group

Thales and L&T Technology Services expand collaboration to provide innovative business models to customers

- New contract builds on 20-year relationship with engineering and R&D firm, L&T Technology Services Limited (LTTS)
- Strategic collaboration will see LTTS use the Thales Sentinel platform to help customers monetize their software with flexible licensing

Thales, a leading global technology and security provider, has announced a new contract with engineering and R&D firm L&T Technology Services Limited (LTTS). This partnership will bring Thales’s software monetization platform, Thales Sentinel, to LTTS’ customer base, especially in the Hi-tech, Sustainability and Mobility segments.

With over two decades of experience in deploying intelligent digital solutions, LTTS leads in optimizing enterprise operations and pioneering platforms in AI, Mobility, Sustainability, and Hi-Tech. LTTS’ advanced AI offerings in next-gen mobility and smart networks are pivotal in building robust digital infrastructures, enhancing safety, efficiency, and sustainability. Now with the Thales Sentinel software licensing and entitlement platform, LTTS will enable its customers to monetize their software solutions through recurring-revenue business models, including agile subscriptions and flexible usage-based pricing.

Damien Bullot, Vice President, Software Monetization Solutions at Thales: “This partnership builds on Thales’s long standing relationship with LTTS, helping their customers unlock the true value and potential of their software products through flexible pricing and subscription models, better compliance, and automated delivery and activation. We look forward to our continued collaboration to ensure their industry-leading AI offers are properly protected and monetized for maximum ROI.”

Under the new contract, LTTS will resell the Thales Sentinel platform to its customer base and group affiliates globally across diverse sectors, including transportation, medical, hi-tech, telecom, and financial services.

Alind Saxena, Executive Director & President, Mobility & Tech at L&T Technology Services: “Our partnership with Thales underscores our commitment to drive innovation and superior solutions across mobility, sustainability, and hi-tech, leveraging their Sentinel platform. Our proficiency in creating cutting-edge digital solutions and our deep understanding of AI will form the backbone of a robust digital infrastructure. We anticipate contributing to a cohesive digital thread throughout the value chain, accelerating market entry, reducing cycle time along with product development costs, and aiding our customers in their journey towards sustainability through improved software monetization potential."

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Elliptic

The US stablecoin landscape: leveraging Ecosystem Monitoring to build trust

The United States policy and regulatory landscape remains in significant flux when it comes to the topic of stablecoins. 

The United States policy and regulatory landscape remains in significant flux when it comes to the topic of stablecoins. 


Lockstep

What do verifiable credentials verify?


Verifiable credentials are one of the most important elements of digital identity today.

What exactly does a verifiable credential verify?

And while we’re on the subject, what is a credential anyway?

Let’s start with existing analogue credentials. Thanks to English, “credential” can be a verb or a noun. And the noun can take two or three very different meanings.

Photo credit: Akbar Nemati via Pexels.

Credentialing

The noun credential usually refers to “a qualification, achievement, quality or aspect of a person’s background, especially when used to indicate their suitability for something” (Ref: Oxford Languages).

There’s a subtle implication in the everyday sense of the word: a credential is generally associated with the criteria for its particular quality and suitability.

Consider professional credentials.  A budding accountant for instance must obtain a particular degree by passing certain tests set by a university; in addition, that degree needs to be deemed suitable by a professional accounting body.

So in this sense, every credential is an abstraction which represents that the holder has satisfied certain rules. A credential has meaning and context.

As a verb, “credential” means to provide someone with credentials.  This might seem obvious, but I think it’s the more important sense of the word. A credentialing process is a formal (rules-based) sequence of events, which has usually been designed to establish the holder’s suitability to undertake specific activities. There is a tight relationship between the credentialing process and the intended use of the credential.

Examples include the onboarding of new employees, enrolment in university courses, admission to professional associations (including recognition of international qualifications), approval of journalists to attend special events such as political conventions, security clearances, and nations’ citizenship requirements.

Credentialing processes are famously conservative. They are the sovereign stuff of nations, academic institutions, and professional societies. Right or wrong, professional credentials are notoriously provincial and difficult to have recognised between different jurisdictions. Credentialing bodies zealously represent communities of interest and reserve the right to set rules as they see fit.

Going from physical to digital credentials

Traditionally, many credentials have been physically manifested as cards, membership tokens and other badges, used by the holder to prove their status to other parties who need to know. These items provide a number of familiar cues to assure us that a credential is genuine, the issuer is legitimate, and the credential hasn’t been modified. Some include photographs which help to show that the credential is in the right hands when presented.

By the way, the plastic card itself is sometimes called a “credential”, but it is more useful to think of it as a carrier or container of credentials, especially as we shift from analogue to digital.

Yet in the move to digital, most credentials in the abstract sense have retained their essential meaning. For example, a government authorised Medicare provider or licenced plumber should be able to assert precisely the same authority in any of their digital workflows—nothing less and nothing more—as they do in the real world.

Credit cards as credentials

A credit card is a token which signifies that the holder is a paid-up member of a payment scheme. The principal data carried by a credit card is a specially formatted number (known as the Primary Account Number or PAN) which encodes membership of the scheme, identifying the cardholder, the scheme and the issuing bank. Note that a credit card is a container that usually carries just one credential.
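The PAN’s structure can be checked mechanically: the leading digits identify the scheme and issuing bank (the issuer identification number), and the final digit is a Luhn check digit. A minimal sketch in Python — the sample card number below is a well-known test value, not a real account:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the card number passes the Luhn check-digit test."""
    digits = [int(d) for d in pan if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the two digits
        total += d
    return total % 10 == 0

# "4111111111111111" is a standard Visa test number
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Note that the Luhn digit catches transcription errors, not fraud — which is why the CVC and, later, cryptographic verification were added on top.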

Credit card numbering has remained unchanged for decades. With the introduction of electronic commerce, shoppers were able to use their card numbers online, thanks to Mail Order / Telephone Order (MOTO) rules. These had been established years before e-commerce to allow merchants to accept plaintext card numbers in card-not-present (CNP) settings.

To combat CNP fraud, the Card Verification Code (CVC) was introduced: an additional number on the back of the credit card that was not captured by merchants’ card imprinting machines and so could not be harvested by dumpster-diving identity thieves.

The CVC is a classic example of security metadata — an additional signal used to confirm the data that really matters, namely the credit card number. Credit card call centre operators had access to back-office lists of PANs and matching CVCs; if a caller could quote the CVC correctly, it was assumed they had the physical card in their hands.

Enter cryptography

Verifiable credentials (sometimes “VCs” for short) are the strongest mechanism today for asserting important personal attributes, such as driver licences, professional qualifications, vaccinations, proof of age, payment card numbers and so on. VCs are central to the next generation European Union Digital Identity (EUDI), the ISO 18013-5 standard mobile driver licences (mDLs) and the latest digital wallets.

Several new VC data structure standards are under development, including the World Wide Web Consortium (W3C) VC data model and ISO 18013-5 mdocs.

All forms of VC include the following:

- information about a particular “Subject” (usually a person, also referred to as the credential holder), such as a licence number or other credential ID
- a name for the Subject (typically a legal name, but pseudonyms are sometimes possible)
- the digital signature of the issuer
- usually a public key of the Subject (used to verify signed presentations of the VC made from a cryptographic container or wallet)
- metadata about the credential (such as its validity period and the type of container it is carried in)
- metadata about the issuer (such as a company legal name, corporate registration number, Ts&Cs for credential usage, etc.).
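As an illustration only, those elements might be laid out as follows. The field names loosely echo the W3C VC data model, and every value here is a made-up placeholder, not a real key or signature:

```python
import json

# Hypothetical example: real VCs carry actual keys and cryptographic proofs.
credential = {
    "type": ["VerifiableCredential", "PlumbingLicenceCredential"],
    "credentialSubject": {
        "name": "Jane Citizen",                 # Subject's name (or pseudonym)
        "licenceNumber": "PL-123456",           # the credential ID
        "publicKey": "z6Mk-placeholder",        # used to verify presentations
    },
    "issuer": {
        "name": "Example Licensing Authority",  # issuer metadata
        "registrationNumber": "00-000-000",
    },
    "validFrom": "2024-01-01",
    "validUntil": "2026-01-01",                 # credential metadata
    "proof": {"type": "ExampleSignature", "value": "placeholder-signature"},
}

print(json.dumps(credential, indent=2))
```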

The digital signature of the issuer preserves the provenance of a verifiable credential: anyone relying on the VC can be assured of its origin and be confident that the credential details have not been altered.

When a VC is presented from a cryptographically capable wallet, a message or transaction incorporating the credential can also be digitally signed using a private key unique to the credential. This assures the receiver that the credential as presented was in the right hands.

Verifiable presentation proves the proper custody and control of the credential and is just as important as verifiability of a credential’s origin.
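The two assurances — origin and custody — can be sketched in a few lines. Since Python’s standard library has no public-key signing, the holder’s signature below is simulated with an HMAC over a shared key; a real wallet would sign with an asymmetric private key matching the public key embedded in the VC:

```python
import hmac, hashlib, json

holder_key = b"holder-device-secret"  # stand-in for the wallet's private key

def present(credential: dict, nonce: str) -> dict:
    """Sign a presentation of the credential plus a verifier-supplied nonce."""
    message = json.dumps({"credential": credential, "nonce": nonce}, sort_keys=True)
    tag = hmac.new(holder_key, message.encode(), hashlib.sha256).hexdigest()
    return {"presentation": message, "holder_signature": tag}

def verify_presentation(p: dict) -> bool:
    """Recompute the signature; any change to the presentation breaks it."""
    expected = hmac.new(holder_key, p["presentation"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, p["holder_signature"])

p = present({"licenceNumber": "PL-123456"}, nonce="abc123")
print(verify_presentation(p))   # True
p["presentation"] = p["presentation"].replace("PL-123456", "PL-999999")
print(verify_presentation(p))   # False: tampering invalidates the signature
```

The nonce supplied by the verifier prevents a captured presentation from being replayed later.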

Telling the story behind the credential

Provenance and secure custody are unique assurances provided by verifiable credentials, but I think the greater power of this technology lies in the depth of the metadata.

VCs deliver rich ‘fine print’ about the credential, the issuer, the wallet and the way in which it was presented, all reliably bound together through digital signatures. So whenever you use a VC to access a resource or sign a piece of work, you leave behind an indelible mark that codifies the history of your credential.

As mentioned, a credential is issued through a formal process, and is recognised by a community of interest as signifying the suitability of its holder for something.

For a person to hold a verifiable credential in a personal cryptographic wallet, a series of specific steps must have taken place.

First and foremost, the Issuer will satisfy itself that the Subject meets all the credentialing requirements. A VC usually carries a public key unique to the Subject and their wallet; this binding means the Issuer can be sure that it hands out its credentials only to the correct individuals. It also allows the Issuer to specify the precise type of device(s) used to carry its credentials, all the way down to smartphone model and biometric performance if those things matter under the Issuer’s security policy.

Virtual credit cards in digital wallets

Continuing our look at credit cards as credentials, the provisioning of virtual credit cards to mobile wallets illustrates the degree of control that a VC issuer has over the end-to-end process.

Typically, a virtual credit card is provisioned to a digital wallet via a mobile banking app running on the same device. Banks have tight control over how their apps are activated. Almost anyone can download a banking app from an app store, but only a genuine customer can get the app to do anything, by following their bank’s prescribed activation steps (which might include entering account-specific details, calling a contact centre, or even visiting a branch for additional checks). Only then will the bank send secure instructions to the device to load a virtual card. The customer will need to unlock their phone (by biometric or PIN) to complete the load.

Behind the scenes, any bank offering mobile phone credit cards must have also made prior arrangements with the phone manufacturer to gain access to the hardware. Apple and Google (the major digital wallet platforms) undertake rigorous due diligence so that only legitimate banks are granted this all-important power.

All this history is coded as metadata into the verifiable credential. When a merchant point-of-sale system receives a signed payment instruction from a digital wallet, we can all be sure that:

- the digital wallet has been unlocked by someone who controls the phone
- the credit card is genuine and was issued by the bank indicated in the credential
- the card was loaded to the wallet by a customer who was approved to use the mobile banking app and was authenticated to do so (making it highly likely that the mobile phone customer and the cardholder are the same person)
- the cardholder is a registered customer of the bank and has passed that bank’s KYC processes.

The VC can include the type of phone it is carried in; it is even possible for the VC to record if the virtual card was issued remotely or in-person.

Minimalist VCs

The acute problem with online authentication today—often given the catch-all label “identity theft”— arises from the use of plaintext credentials and identifiers.

There are countless scenarios where a counterparty needs to know you have a particular credential, but if the only evidence you can provide is a plaintext number, then businesses and individuals alike are sitting ducks because so many identifiers have been stolen in data breaches and traded on black markets.

The simplest, lowest risk solution is to conserve the important IDs we are all familiar with, but harden them in digital form, so they cannot fall into criminal hands.

That might sound complicated, but we have done it before!

The transition from magnetic stripe to chip payment cards was made for exactly the same reason: to eliminate plaintext data.  Chip cards present cardholder data through digitally signed verifiable messages — making them one of the earliest examples of verifiable credentials.

Digital wallets use the same technology as chip cards and are rapidly taking over from plastic. The Reserve Bank reports that well over one third of card payments by Australian consumers are now made through mobile wallets. Yet as we have seen, the meaning and business context of credit cards were unchanged through the course of these technology upgrades. That conservation of credentialing processes was key to the chip revolution.

Minding your business

In any digital transformation, it is not the new technology that creates the most cost, delay and risk; rather it’s the business process changes. The greatest benefit of verifiable credentials is they can conserve the meaning of the IDs we are all familiar with, and all the underlying business rules.

The real power of VCs lies not in what they change but what they leave the same!

A minimalist verifiable credential carrying a government ID means nothing more and nothing less than the fact that the holder has been issued that ID. By keeping things simple, a VC avoids disturbing familiar trusted ways of dealing with people and businesses.

Powerful digital wallets are being rapidly embraced by consumers; modern web services are able to receive credentials from standards-based devices. We are ready to transform all important IDs from plaintext to verifiable credentials. Most people now could present any important verified data with a click in an app, with the same convenience, speed and safety as showing a payment card. With no change to backend processes and credentialing, we would cut deep into identity crime and defuse the black market in stolen data.

The post What do verifiable credentials verify? appeared first on Lockstep.

Tuesday, 20. August 2024

Spruce Systems

SpruceID Joins NIST National Cybersecurity Center of Excellence (NCCoE) to Accelerate Mobile Driver’s License Adoption

Learn about the current initiative, benefits of the mobile driver's license, and how SpruceID will collaborate with the NCCoE.

SpruceID is participating in the National Cybersecurity Center of Excellence (NCCoE) Accelerate Adoption of Digital Identities on Mobile Devices Consortium. This initiative will help define and facilitate a reference architecture for digital credentials that protect privacy, are implemented securely, enable equity, are widely adoptable, and are easy to use.

Understanding the Initiative

The National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE) is a collaborative hub where industry, organizations, government agencies, and academic institutions work together to address businesses’ most pressing cybersecurity challenges.

The NCCoE is playing a pivotal role in expediting the adoption of mobile driver's license (mDL) standards and best practices. In partnership with technology vendors (including SpruceID), government agencies, regulatory bodies, standards organizations, and entities aiming to implement mDLs, the NCCoE is kicking off an initiative to build a reference architecture that showcases practical, real-world business use cases. This initiative will integrate mDLs with commercially available technologies and embed them into existing business processes:

“Whether boarding a plane, creating a bank account, or making an online purchase, mobile driver’s licenses (mDLs) and other digital credentials have the potential to improve the way we conduct transactions, both in person and online. To help realize this potential, the NCCoE is collaborating with more than a dozen partners from across the mDL ecosystem to build out reference implementations and to accelerate the adoption of mDL standards and best practices.” 

- Bill Fisher, co-lead of the NIST mDL project, NIST National Cybersecurity Center of Excellence

This reference implementation aims to promote standards and best practices for mDL deployments and address mDL adoption challenges. Over the next two years the project will produce guidance addressing:

- Know Your Customer / Customer Identification Program Onboarding and Access, which will demonstrate the use of an mDL and/or Verifiable Credentials (VC) for establishing and accessing an online financial account.
- U.S. Federal Government Credential Service Provider (CSP) and Federation, which will demonstrate the use of an mDL and/or VC for establishing a CSP account to access federated agency systems.
- Healthcare and Electronic Prescribe, which will demonstrate the use of an mDL and/or VC for provider access and prescription uses.

Benefits of the Mobile Driver’s License

Physical driver’s licenses were not designed for our online world. The current best practice for online identity verification asks users to take a picture of their driver’s license with a smartphone and to answer knowledge-based questions. The efficacy of these methods is being eroded by new technology, such as AI-generated images of driver’s licenses accurate enough to bypass document scanning tools and the ability of bad actors to get ahold of the information needed to answer knowledge-based questions.

mDLs function much like a traditional driver's license, carrying information such as name, date of birth, and address but in a digital format accessible through a dedicated mobile application, often referred to as a digital wallet. Compared to physical driver’s licenses, mDLs have several capabilities that make them easier to use with online and digital transactions:

- mDLs are underpinned by public key cryptography, making the credential cryptographically verifiable.
- mDLs can be integrated natively with device biometrics for user verification.
- mDLs can communicate natively between two mobile applications, but also in cross-device flows between mobile applications and the web browser on a laptop or tablet.
- mDLs offer the potential for selective disclosure, allowing users to pick and choose which information to share with third parties.
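Selective disclosure is typically achieved by having the issuer sign salted hashes of each claim rather than the claims themselves, so the holder can reveal one field without exposing the rest. A simplified sketch of that idea — not the exact ISO 18013-5 encoding:

```python
import hashlib, secrets

def commit_claims(claims: dict):
    """Issuer side: salt and hash each claim; only the digests get signed."""
    salts, digests = {}, {}
    for name, value in claims.items():
        salt = secrets.token_hex(16)
        salts[name] = salt
        digests[name] = hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()
    return digests, salts

def verify_disclosure(digests, name, value, salt) -> bool:
    """Verifier side: recompute the digest for one disclosed claim."""
    return digests[name] == hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()

digests, salts = commit_claims({"age_over_21": True, "address": "1 Main St"})
# The holder discloses only the age claim (with its salt); the address stays hidden.
print(verify_disclosure(digests, "age_over_21", True, salts["age_over_21"]))   # True
print(verify_disclosure(digests, "age_over_21", False, salts["age_over_21"]))  # False
```

The per-claim salt stops a verifier from brute-forcing undisclosed values by hashing guesses.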

Transactions at financial institutions, healthcare providers, government services, and many other organizations could benefit from enhanced customer experiences, more accurate identity verification, and reduced fraud if they supported mDLs.

How SpruceID will Collaborate with NCCoE

SpruceID is proud to have been selected to partner with the NCCoE to expedite the adoption of mobile driver’s license standards and best practices. Several of our contributions to this project will include:

- Coordinating and collaborating with other parties to demonstrate success for the Financial Services Sector CIP/KYC use case, serving the primary role of a Wallet Provider.
- The use of our open-source libraries, including the SpruceKit Wallet, an application holding mDocs and Verifiable Credentials that can interact over the internet and app-to-app using ISO/IEC 18013-7 and OpenID4VP.
- Bringing our expertise and learnings from the interoperability test events we previously hosted for ISO/IEC 18013-7 in August 2023, and from the development and deployment of the California DMV mobile driver’s license application.

We look forward to leveraging our unique knowledge and expertise to help drive this initiative forward.

Stay up to Speed

Interested in learning more and staying up to date with major milestones? Attend upcoming mDL events and follow along for updates on the NCCoE website mDL home page.

Attend Upcoming Events

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Some Direction for AI/ML-ess Marketing

by John Tolbert

For the last few years, we have been inundated with messaging about Artificial Intelligence (AI). AI is no longer a term mostly used by academicians, IT professionals, or sci-fi fans. Those in the IT security field have seen AI, ML (Machine Learning), and Generative AI (GenAI) proliferating in marketing, while product developers look for ways to incorporate these technologies into products. Vendors touting some variation of artificial intelligence in their products have garnered more investment. There have been productivity gains. But has “AI/ML” as a marketing term peaked?

A recent study in the Journal of Hospitality Marketing & Management, titled “Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk” shows that consumers are put off by the use of “AI” in product marketing. Some of the reasons cited include a lack of trust for AI, a lack of transparency about AI usage, and concerns about privacy. Although this study focused on consumer goods and services, do the lessons learned apply to IT, and specifically cybersecurity?

I recently returned from Black Hat 2024 in Las Vegas. While there was plenty of AI, ML, and GenAI signage in booths on the show floor, how vendors are marketing these technologies in products seems to be shifting a bit. Security practitioners are and have been aware of the presence of, and need for, machine learning in products for many years. An example is the use of ML detection models in Endpoint Protection Detection and Response (EPDR) products to identify new variants of malware. It is infeasible to build an EPDR solution today that does NOT use ML, given the volume of malware variants discovered every day. AI/ML is not new in the market, and it is not new to those of us working in the field. Perhaps this realization among product marketing teams is another reason why the messaging is changing and needs to evolve further.

2023 was certainly the year of GenAI, with large language models (LLM) capturing not only the attention of the public but also becoming mainstream tools. Vendors large and small rushed to find ways to get GenAI into products. Such objectives are innovative, and can result in improvements in usability, but not always. Customers of IT security solutions may be skeptical about unqualified claims of how GenAI improves those products.

Continuing with the EPDR example, several vendors have natural language query interfaces powered by GenAI, guided investigation tools for analysts informed by AI, and executive level reports drafted by GenAI. These have the potential to save time and improve organizational security posture for customers. However, there are concerns about the quality of the output. Can it be trusted? AI outputs have explainability problems. Moreover, since the outputs from AI tools depend on the quality and relevance of the data in their models, how are security vendors getting a sufficient quantity of relevant data, and how do they assess the veracity of the outputs of their LLM functions? How can customers be assured that data governance and security policies are applied to the data from their organizations?

In discussing LLMs, how they work, and answering questions about whether LLMs lie or hallucinate in the Journal of Ethics and Information Technology, Hicks, Humphries, and Slater state that LLMs are “not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” In the proceedings of the 2022 Conference on Human Information Interaction and Retrieval, Bender and Shah said about LLMs: “No reasoning is involved […]. Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language.”

At this point, IT (and especially IT security) vendors and their product marketing teams would be better served by providing more information about their use of ML and GenAI in their solutions. Assume you have a tech-savvy audience, because you do. What kinds of AI technology are you using? For which functions is it being used? Where are you getting data for model training? How are you doing quality control on the outputs before releasing them to customers? These are the kinds of questions that buyers of security solutions have.

Join us in December in Frankfurt at our cyberrevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

- Cybersecurity Resilience with Generative AI
- Generative AI in Cybersecurity – It's a Matter of Trust
- ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?
- Asking Good Questions About AI Integration in Your Organization
- Asking Good Questions About AI Integration in Your Organization – Part II

Thales Group

Itaú Unibanco innovates with a credit card specifically designed to meet the needs of visually impaired people


Sao Paulo - Itau Unibanco and Thales, a global leader in advanced technologies, are launching the first Voice Payment Card. Created in conjunction with Handsome, a fintech specialist in inclusion, the card has been specially developed to help people with some kind of visual impairment at each stage of the transaction, by vocalizing the amount of the purchase to the user before they enter the password. The feature also informs the customer that the transaction has been completed.

'We have a strong commitment to diversity and inclusion. That's why we've developed this card so that we can help thousands of the bank's customers make their card payments with much more autonomy and security in face-to-face transactions,' explains Mario Miguel, Payments Director at Itau Unibanco.

Thales' exclusive technology works via a credit card connected via Bluetooth to a mobile app available on the Apple Store (iPhone) and Play Store (Android). Upon receiving the card, the user simply brings it closer to the cell phone with the Bluetooth enabled to pair them securely. Thus, when making any payment, the connected card communicates the amount to the Thales app, which is voiced to the user by the phone's audio before any validation. When the user enters the password into the machine to authorize the payment, the app notifies the user that the password has been entered correctly and that the transaction has been confirmed.

According to Gustavo Daniel, Thales' Head of Banking and Payment Services Sales for Brazil, the Voice Payment Card project was driven by the purpose of contributing to a more inclusive society.

'As a leader in innovative and more responsible payment solutions, Thales aims to provide convenience and autonomy to as many people as possible. We are proud to share the same goal with Itau, and to meet the needs of our customers,' he adds.

Itau is a pioneer in the initiative and, together with Thales, invited visually impaired people to test the Voice Payment Card technology and collect comments and suggestions for improvement.

'Our approach to cultural transformation focused on co-creating solutions with the customer at the center of our decisions allows us to move forward to offer the best experiences. This is an infinite journey, which aims to be increasingly aligned with the diverse needs of Itau's customers,' adds Lineu de Andrade, Operations Director at Itau Unibanco.

Thales is a leader in payment innovation exploring more responsible solutions, offering a wide selection of eco-friendly bank cards (made from bio-sourced or recycled plastic materials), as well as more inclusive options, to ensure the accessibility of payment experiences for anyone.

 

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to EUR4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of EUR18.4 billion.


Elliptic

Crypto regulatory affairs: Fed undertakes enforcement against Customers Bank for digital asset risk management gaps

The Federal Reserve Board has sent a warning to banks about the importance of addressing cryptoasset risk exposure through a recent and landmark enforcement action.



1Kosmos BlockID

Four Ways to Align Authentication with Business Needs

In a hybrid world that blends on-premises and cloud-based resources, securing access to sensitive data and systems is no longer achieved by defending a perimeter, but through authentication. While authentication technologies have evolved over the past decades from their humble password origins, preventing unauthorized access still hinges on choosing and implementing the right identity-based controls.

This involves navigating a landscape where knowledge-based, possession-based, biometric, and multi-factor authentication (MFA) methods offer a variety of advantages and limitations. Let’s consider each of the options available to organizations and how to select the right mix of controls to improve their security posture.

Knowledge-Based Authentication

Knowledge-based authentication (KBA), which encompasses passwords and PINs, is the most traditional form of authentication. Its widespread adoption and user familiarity make it a convenient starting point for many security protocols. However, its susceptibility to social engineering, phishing attacks, and the perennial issue of weak password creation by users necessitate a cautious approach. For environments where ease of use is paramount and risk levels are comparatively low, KBA can serve as a component of a more comprehensive security strategy, particularly when augmented with additional authentication factors.

In practice, KBA fits environments where the accessed information is not highly sensitive or critical, and it can serve as a supplementary factor alongside other methods, such as biometric or device-based authentication. Examples include accessing non-critical information, using KBA as a first factor in combination with other authentication methods, and offering streamlined access on public Wi-Fi hotspots without compromising security.

Possession-Based Authentication

Possession-based authentication methods require users to have a physical object, such as a security token or a mobile device, to gain access. This approach adds a tangible layer of security, making it harder for attackers to gain unauthorized access without physical possession of the required object. It’s particularly effective in scenarios where additional security is needed without significantly complicating the user experience, such as in financial transactions or access to high-security areas. However, the risk of loss or theft and the potential cost implications of deploying hardware devices must be considered.

Possession-based authentication methods offer heightened security measures for a range of scenarios, including financial transactions, remote work access, secure online transactions, and compliance-driven environments like legal and government agencies. In online banking, users require physical possession of a security token or mobile device to access their accounts securely. Similarly, in remote work settings, this method ensures that only authorized employees with designated devices can connect to corporate networks and sensitive data, mitigating risks associated with unauthorized access. Additionally, in e-commerce platforms and online payment systems, possession-based authentication enhances transaction security, reducing the risk of fraud and protecting sensitive financial information. Furthermore, compliance-driven industries can benefit from this approach to meet regulatory obligations and safeguard confidential information.
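One widely deployed possession factor is the time-based one-time password (TOTP, standardized in RFC 6238): the code proves possession of a device holding a shared secret. A minimal standard-library sketch:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Derive a time-based one-time password from a shared secret (RFC 6238, SHA-1)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59s
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and depends on a secret stored on the device, a stolen password alone is not enough — though the secret itself must still be provisioned and stored securely.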

Biometrics

Biometric authentication offers a high-security level by utilizing unique user characteristics like fingerprints, facial recognition, or iris scans. This method is highly resistant to traditional hacking attempts and provides a seamless user experience. It is well-suited for environments where security cannot be compromised, such as in government or healthcare settings. Nevertheless, concerns around privacy, the potential for spoofing, and the need for compatible hardware investments can pose challenges. Organizations must weigh these factors against the critical need for secure and user-friendly authentication mechanisms.

Biometric authentication, which leverages unique user characteristics like fingerprints, facial recognition, or iris scans, is ideal for various high-security environments. It is best suited for secure access to sensitive data and fortifying high-risk online systems. Despite its advantages, organizations must consider privacy concerns, potential spoofing, and compatible hardware investments when deploying biometric authentication systems.

Multi-Factor Authentication (MFA)

MFA combines two or more authentication methods listed above to create a layered security approach, significantly enhancing protection against various threats. By integrating knowledge, possession, and biometric factors, MFA creates a dynamic defense mechanism that is much harder for attackers to bypass. This method is ideal for protecting sensitive data and critical systems, offering a balanced solution that addresses the vulnerabilities inherent in single-method authentication systems. While MFA introduces complexity and potential user resistance, its ability to significantly reduce security risks makes it a vital component of modern cybersecurity strategies.
Multi-factor authentication (MFA) is a versatile security method that finds applications across industries, serving to protect sensitive data and critical systems. Most commonly, MFA is required to ensure secure access to corporate systems from outside the office, and in e-commerce platforms to safeguard customer accounts and high-risk customer and citizen transactions. Overall, MFA provides a defense mechanism against various threats, combining multiple authentication factors to significantly enhance security and mitigate risks inherent in single-method authentication systems.

Passwordless

Passwordless authentication represents a significant leap forward in cybersecurity, eliminating the vulnerabilities associated with traditional knowledge-based methods. The majority of authentication methods included in the above still require a user name AND password as a first step in authenticating users. But, by leveraging biometrics, mobile devices, or security keys, passwordless systems offer a user-friendly and highly secure alternative that reduces the risk of phishing, password theft, and unauthorized access. This method is particularly advantageous in creating a seamless user experience without compromising security, and ideal for environments aiming to minimize friction while maintaining high security standards. Organizations looking to bolster access security while enhancing user satisfaction should consider integrating passwordless authentication into their strategic security framework, offering an optimal balance between ease of use and robust protection.
Organizations across diverse sectors, particularly those looking for a better, more secure user experience, should carefully consider integrating passwordless authentication into their security frameworks. By leveraging biometrics, mobile devices, or security keys, passwordless systems offer a robust and user-friendly alternative to traditional password-based methods, effectively mitigating the risks associated with phishing, password theft, and unauthorized access. This approach not only enhances security posture but also fosters a seamless and efficient user experience, aligning with the modern landscape of digital operations where stringent security measures and user satisfaction are paramount.

Choosing the Right Strategy

The choice of authentication method should be driven by an organization’s specific needs, considering factors such as the sensitivity of the data, user experience requirements, and regulatory compliance mandates. Here are four key considerations for selecting the appropriate authentication method:

Risk Assessment: Evaluate the level of security risk associated with the data or systems being protected. Higher-risk scenarios may warrant more stringent authentication methods, such as biometrics or MFA.

User Experience: Consider the impact on the user. While security is paramount, overly cumbersome authentication processes can lead to poor compliance and user frustration.

Cost and Infrastructure: Assess the financial and infrastructure implications of deploying new authentication technologies. While advanced methods like biometric authentication offer enhanced security, they also come with higher implementation costs.

Compliance Requirements: Ensure that the chosen authentication method aligns with industry regulations and standards, which may dictate specific security measures.

Defending against increasingly sophisticated cyber threats requires understanding the unique advantages and limitations of available authentication methods, and selecting the controls that are best aligned with organizational needs and user expectations. Using the methods described above can help define an authentication strategy that ensures security measures remain robust, responsive, and user-friendly.

The post Four Ways to Align Authentication with Business Needs appeared first on 1Kosmos.


Ocean Protocol

Ocean Nodes Incentives: A Detailed Breakdown

This blog post will provide a detailed breakdown of the incentive mechanism for Ocean Nodes, including who is eligible and when rewards will be distributed.

With the recent launch of Ocean Nodes, a peer-to-peer (P2P) network that allows users to run all components of the Ocean Protocol stack — such as Ocean Provider, Aquarius, and Compute-to-Data — within a single component, we are excited to unveil the Ocean Nodes Boosters (ONBs), the Soulbound Tokens used in the incentive system.

This article dives into the details of how the incentives work, including the eligibility criteria and the timeline for reward distribution.

Understanding Ocean Nodes Boosters (ONBs)

The Ocean Nodes Boosters (ONBs) are non-transferrable ERC721 tokens, also known as Soulbound Tokens, that work as a key incentive mechanism to measure and maintain a high degree of node availability in the network. These tokens provide reward multipliers based on node uptime, incentivizing reliable participation in the network.

Here’s how the Ocean Nodes Boosters (ONBs) are structured across different launch phases:

Phase 1 ONB (ONB1): 1.5x reward multiplier
Phase 2 ONB (ONB2): 1.3x reward multiplier
Phase 3 ONB (ONB3): 1.2x reward multiplier

The reward multipliers increase depending on the combination of ONBs:

ONB1 + ONB2: 1.8x reward multiplier
ONB1 + ONB3: 1.7x reward multiplier
ONB1 + ONB2 + ONB3: Maximum 2x reward multiplier

The maximum reward multiplier a node can achieve is 2x if it holds all three ONBs, providing a powerful incentive to participate across all phases of the Ocean Nodes launch.
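
The multiplier rules above can be expressed as a simple lookup. This is a hypothetical sketch, not Ocean code: only the combinations stated in the post are encoded, and the 1.0x fallback for any unlisted combination is our assumption.

```python
# Hypothetical sketch of the ONB multiplier table described above.
# Combinations not stated in the post fall back to 1.0x (an assumption).
MULTIPLIERS = {
    frozenset(["ONB1"]): 1.5,
    frozenset(["ONB2"]): 1.3,
    frozenset(["ONB3"]): 1.2,
    frozenset(["ONB1", "ONB2"]): 1.8,
    frozenset(["ONB1", "ONB3"]): 1.7,
    frozenset(["ONB1", "ONB2", "ONB3"]): 2.0,
}

def reward_multiplier(onbs):
    """Return the uptime reward multiplier for a set of held ONBs."""
    return MULTIPLIERS.get(frozenset(onbs), 1.0)
```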

Uptime & Rewards Calculation

The Ocean Nodes incentive structure is designed to reward nodes that maintain high availability and uptime. The Ocean Protocol Foundation will allocate 5,000 $FET each week to nodes that demonstrate a high level of uptime. Rewards are calculated using the following formula:

R0 = Xt * U0 / Ut

Where:

R0 = Total Rewards earned

Xt = Total Rewards available

U0 = Node Uptime in seconds

Ut = Total Uptime per week, in seconds

Note: The Ocean Protocol Foundation nodes are excluded from these reward calculations, ensuring a fair distribution of incentives to independent participants.
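
Putting the formula together with the uptime multipliers, the weekly split can be sketched as follows. Function and field names here are illustrative, not from the Ocean codebase; each node's uptime is first scaled by its ONB multiplier, and Foundation nodes are excluded before the pro-rata division.

```python
# Sketch of the weekly reward formula R0 = Xt * U0 / Ut, applied to
# multiplier-adjusted uptimes. Names are hypothetical.
WEEKLY_POOL_FET = 5_000

def split_rewards(nodes, pool=WEEKLY_POOL_FET):
    """nodes: list of (name, uptime_seconds, multiplier, is_foundation).
    Returns each eligible node's share of the weekly $FET pool."""
    adjusted = {name: uptime * mult
                for name, uptime, mult, foundation in nodes
                if not foundation}
    total = sum(adjusted.values())  # Ut: total adjusted uptime
    return {name: pool * adj / total for name, adj in adjusted.items()}
```

Running this on the four-node scenario below reproduces the same split shown in the worked example.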

Now, let’s go through an example to illustrate how this works. We will look at a scenario involving four nodes:

Node A: 10 sec uptime in Epoch X, holding ONB1 (1.5x reward multiplier) in their wallet
Node B: 20 sec uptime in Epoch X, no ONB
Node C: 10 sec uptime in Epoch X, holding ONB1+ONB2+ONB3
Node D: run by the Ocean Protocol Foundation, therefore excluded from rewards

A node’s adjusted uptime is its uptime multiplied by its reward multiplier. So, using the scenario above:

Node A’s adjusted uptime = 10 seconds * 1.5 = 15 seconds
Node B’s adjusted uptime = 20 seconds (no multiplier)
Node C’s adjusted uptime = 10 seconds * 2 = 20 seconds
Node D is excluded from rewards

Total Uptime = 15 seconds (Node A) + 20 seconds (Node B) + 20 seconds (Node C) = 55 seconds
Total rewards for the week = 5,000 $FET

Node A’s share = 15/55 ≈ 27.27% ≈ 1,364 $FET
Node B’s share = 20/55 ≈ 36.36% ≈ 1,818 $FET
Node C’s share = 20/55 ≈ 36.36% ≈ 1,818 $FET
Node D’s share = 0 $FET (excluded from rewards)

Eligibility for Incentives

To be eligible for incentives, nodes must meet specific criteria to ensure only active and publicly accessible nodes are rewarded. The following requirements must be met:

Public Accessibility: Nodes must have a public IP address
API and P2P Ports: Nodes must expose both HTTP API and P2P ports to facilitate seamless communication within the network

Users can verify node eligibility by connecting to the Ocean Nodes dashboard and checking for a green status indicator next to their IP address.

Steps to Install the Node and Be Eligible for Rewards

To help you get started and ensure your node is eligible for rewards, follow these steps:

1. Find your public IP: You’ll need this for the configuration. You can easily find it by googling “my IP”.
2. Run the Quickstart Guide: If you’ve already deployed a node, we recommend either redeploying with the guide or ensuring that your environment variables are correct and you’re running the latest version.
3. Get your Node ID: After starting the node, you can retrieve the ID from the console.
4. Expose your node to the internet: From a different device, check that your node is accessible by running

telnet {your ip} {P2P_ipV4BindTcpPort}

To forward the node port, please follow the instructions provided by your router manufacturer (e.g. Asus, TP-Link, Huawei, Mercusys).
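
If telnet is unavailable on the device you are testing from, the same reachability check can be scripted. This is a minimal sketch; substitute your own public IP and configured P2P port.

```python
import socket

def port_is_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# e.g. port_is_open("203.0.113.7", 9000) returns True once the
# node's P2P port is reachable from the internet.
```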

Verify eligibility on the Ocean Node Dashboard: Check https://nodes.oceanprotocol.com/ and search for your peerID to ensure your node is correctly configured.

Considerations

As Ocean Nodes are currently in an alpha stage, please remember to:

Regularly update your deployment to maximize uptime.

Account for potential issues such as node bugs*, internet disruptions, and more when measuring uptime.

*Report bugs in our dedicated Discord channel so we can address them as soon as possible. When reporting, please include useful information such as the environment variables (excluding private keys), hardware specifications, and relevant logs. Please remember NOT to share your private key with anybody.

Note: The current uptime may not be accurate as we’ve been testing and the monitoring system has been off multiple times. The uptime will reset on Thursday, August 29, at 00:00 UTC.

Reward Distribution & Timing

Rewards for node operators are calculated on a weekly basis, using Epochs to track uptime and performance.

Epoch Timing: Each epoch begins on Thursday at 00:00 UTC.

Reward Distribution: While rewards are calculated weekly, the distribution may occur a few days or weeks after the epoch ends. This delay is intended to optimize for gas fees and ensure efficient transactions; however, there is a possibility that rewards could be distributed on the same day the epoch ends, depending on network conditions.

Conclusion

Ocean Nodes represent a significant step forward for decentralized AI development and data sharing. The incentive structure, highlighted by the introduction of the Ocean Nodes Boosters (ONBs), ensures that active and reliable nodes are rewarded proportionally, contributing to a healthy and sustainable network.

To start running your node today, access the Ocean Nodes README and follow the Quickstart guide available in the main repository for detailed instructions on deployment.

By becoming part of the Ocean Nodes now, you’re contributing to the evolution of decentralized AI and also positioning yourself to benefit from the growing opportunities within the Ocean Protocol ecosystem.

Stay tuned for more updates by following us on X and joining the discussion in our Discord Server.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Ocean Protocol is a founding member of the ASI Alliance.

Follow Ocean on Twitter or Telegram to keep up to date, and Predictoor’s Twitter for its news. Chat directly with the Ocean community on Discord. Track Ocean’s tech progress directly on GitHub.

Ocean Nodes Incentives: A Detailed Breakdown was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Verida

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part…

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part 1)

This is the first of three posts over the next three weeks to release the “Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI” and was originally published by Chris Were, CEO and co-founder at Verida.

Introduction

Verida’s mission has always been clear: empower individuals to own and control their data. Now, we’re taking it further.

This Technical Litepaper presents a high-level outline of how the Verida Network is growing beyond decentralized, privacy preserving databases, to support decentralized, privacy-preserving compute optimized for handling private data. There are numerous privacy issues currently facing AI that web3 and decentralized physical infrastructure networks can help solve. From Verida’s perspective, this represents an expansion of our mission from allowing individuals to control their data to introducing new and powerful ways for users to benefit from their data.

Current AI Data Challenges

We are running out of high-quality data to train LLMs

Public internet data has been scraped and indexed by AI models, with researchers estimating that by 2026 we will exhaust high-quality text data for training LLMs. The next frontier is private data, but it is hard and expensive to access.

Private enterprise and personal AI agents need to access private data

There is a lot of excitement around the next phase of AI beyond chat prompts: digital twins or personal AI agents that know everything about us and support every aspect of our professional and personal lives. However, to make this a reality, AI models need access to private, real-time, context-level user data to deliver more powerful insights and a truly personalized experience.

Existing AI platforms are not private

The mainstream infrastructure providers powering the current generation of AI products have full access to prompts and training data, putting sensitive information at risk.

AI trust and transparency is a challenge

Regulation is coming to AI, and it will become essential that AI models can prove their training data was high quality and ethically sourced. This is critical to reduce bias and misuse and to improve safety in AI.

Data creators aren’t being rewarded

User-owned data is a critical and valuable resource for AI and those who create the data should benefit from its use. Reddit recently sold user data for $200M, while other organizations have reached similar agreements. Meta is training its AI models on user data from some countries, but excluding European users due to GDPR preventing them from doing so without user consent.

Verida’s Privacy Preserving Infrastructure

Verida has already developed the leading private decentralized database storage infrastructure (see Verida Whitepaper) which provides a solid foundation to address the current AI data challenges.

Expanding the Verida network to support privacy-preserving compute enables private, encrypted data to be integrated with leading AI models, ensuring end-to-end privacy, safeguarding data from model owners. This will unlock a new era of hyper-personal and safe AI experiences.

AI services such as ChatGPT have full access to any information users supply and have already been known to leak sensitive data. Giving model owners access to private data increases the risk of data breaches, imperils privacy, and ultimately limits AI use cases.

There are three key problems Verida is solving to support secure private AI:

Data Access: Enabling users to extract and store their private data from third-party platforms for use with emerging AI prompts and agents.

Private Storage and Sharing: Providing secure infrastructure allowing user data to be discoverable, searchable, and accessible, with user consent, to third-party AI platforms operating within verifiable confidential compute environments.

Private Compute: Providing a verifiable, confidential compute infrastructure enabling agentic AI computation to securely occur on sensitive user data.

Supporting the above tasks, Verida is building a “Private Data Bridge”, allowing users to reclaim their data and use it within a new cohort of personalized AI applications. Users can pull their private data from platforms such as Google, Slack, Notion, email providers, LinkedIn, Amazon, Strava, and much more. This data is encrypted and stored in a user-controlled private data Vault on the Verida network.

It’s important to note that Verida is not building infrastructure for decentralized AI model training, or distributed AI inference. Rather, Verida’s focus is on providing a high performance, secure, trusted and verifiable infrastructure suitable for managing private data appropriate for AI use cases.

We have relationships with third parties that are building private AI agents, AI data marketplaces, and other privacy-centric AI use cases.

Comparing Current AI Solutions

AI solutions can be deployed primarily through two methods: cloud-based/hosted services or on local machines.

Cloud-based AI services, while convenient and scalable, expose sensitive user data to potential risks, as data processing occurs on external servers and may be accessible to third parties.

In contrast, local AI environments offer enhanced security, ensuring that user data remains isolated and inaccessible to other applications or external entities. However, local environments come with significant limitations, including the need for technical expertise that is not available to the majority of users. Moreover, these environments often face performance challenges; for instance, running large language models (LLMs) on standard consumer hardware is typically impractical due to the high computational demands.

Verida’s Confidential Storage and Compute infrastructure offers alternatives to these approaches.

Comparison of different AI infrastructure options

Apple has recently announced Private Cloud Compute, which provides a hybrid local + secure cloud approach. AI processing occurs on a local device (i.e., a mobile phone) by default; when additional processing power is required, the request is offloaded to Apple’s servers operating within a trusted execution environment. This is an impressive offering focused on solving important security concerns relating to user data and AI. However, it is centralized, only available on Apple devices, and puts significant trust in Apple, as they control both the hardware and attestation keys.

Self-Sovereign AI Interaction Model

Let’s look at what an ideal model of confidential AI architecture looks like. This is an interaction model of how a basic “Self-Sovereign AI” chat interface, using a RAG-style approach, would operate in an end-to-end confidential manner.

Self-Sovereign AI Interaction Model

The End User Application in this example will be a “Chat Prompt” application. A user enters a prompt (i.e., “Summarize the conversation I had with my mates about the upcoming golf trip”).

A Private AI API endpoint (AI Prompt) receives the chat prompt and breaks down the request. It sends a prompt to the LLM, converting the original prompt into a series of search queries. The LLM could be an open source or proprietary model. Due to the confidential nature of the secure enclave, proprietary models could be deployed without risk of IP theft by the model owner.

The search queries are sent to the User Data API which has access to data previously obtained via Verida’s Private Data Bridge. This data includes emails, chat message histories and much more.

The Private AI API collates the search query results and sends the relevant responses and original prompt to the LLM to produce a final result that is returned to the user.
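
In code, the flow above might look like the following sketch. All classes and method names are hypothetical stand-ins for the Private AI API, the LLM, and the User Data API; in a real deployment these steps run inside the confidential compute environment.

```python
# Illustrative RAG-style flow: prompt -> search queries -> private data
# lookup -> final completion. Stubs stand in for the real enclave services.

class StubLLM:
    def make_queries(self, prompt):
        # Step 2: turn the prompt into naive keyword queries
        return [w for w in prompt.lower().split() if len(w) >= 4]

    def complete(self, prompt, context):
        # Step 4: produce the final answer from prompt + retrieved context
        return "Summary: " + "; ".join(context)

class StubUserDataAPI:
    def __init__(self, records):
        self.records = records  # e.g. emails, chat message histories

    def search(self, query):
        return [r for r in self.records if query in r.lower()]

def answer_prompt(prompt, llm, user_data):
    queries = llm.make_queries(prompt)
    # Step 3: collect matching private records, deduplicated
    context = list(dict.fromkeys(
        rec for q in queries for rec in user_data.search(q)))
    return llm.complete(prompt, context)
```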

Verida is currently developing a “showcase” AI agent that implements this architecture and can provide a starting point for other projects to build their own confidential private AI products.

Continue reading Part 2.

Verida Technical Litepaper: Self-Sovereign Confidential Compute Network to Secure Private AI (Part… was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

What is DIDComm? (With Pictures!)

The post What is DIDComm? (With Pictures!) appeared first on Indicio.

By Sam Curren

Trusted communication continues to be the internet’s critical missing component, even as our reliance on digital services like healthcare, mobile banking, and payments increases and where seamless, secure interactions are vital. While there are some applications and protocols designed to try to foster secure communication, they are narrow in scope and fail to broadly support the diverse types of communication we need. This shortcoming stems from their fragmented abilities and limited scope. They focus on specific areas of communication, such as simplifying complex login procedures or various security schemes, but they fail to allow the kinds of communication necessary for a variety of online activities. 

The result is an incomplete tech landscape in which direct, secure communication is not fully achievable, users are left with a patchwork of partial solutions, and a successful zero trust security practice continues to challenge even the most well-resourced organizations. Without holistic and user-friendly solutions that address these shortcomings, true, trusted, general communication on the internet remains an unfulfilled promise.

This is why more industries than ever are turning to decentralized identity and verifiable credentials to solve these missing pieces and why we’ve built DIDComm into the heart of Indicio Proven. While many other standards and protocols are developing to support the simple exchange of information using verifiable credentials, the vast majority of customer use cases that Indicio supports require both sides to authenticate, communicate, and build using the existing infrastructure they’ve already invested in. You can see deployments in travel, financial services, government and more.

The success comes from DIDComm

DIDComm, or DID Communication, is a protocol designed to enable secure and private communication between parties by using decentralized identifiers (DIDs). Unlike traditional methods for trusted connections, DIDComm provides a robust framework for mutual authentication and trusted communication, addressing the gaps in current technologies. DIDComm leverages Verifiable Credentials to add trust to long-term digital relationships. By integrating DIDComm into an existing tech stack or ecosystem, both end users and businesses benefit from enhanced security, privacy, and trust. 

For end users, DIDComm ensures that the communications they have with each other are not only encrypted but also authenticated. This means they are protected both from malicious actors impersonating them and from actors impersonating the business or other entity they are communicating with. This benefits businesses and governments by facilitating secure and seamless interactions with customers, partners, and citizens, while reducing the risk of impersonation, mitigating fraud, and enhancing trust.

The decentralized nature of DIDComm also means there is no reliance on a central authority, organization, or company to manage the process or facilitate identity (anyone can use software to create a DID with an endpoint for DIDComm and cryptographically prove they control their DID). This increases resilience and reduces security vulnerabilities with a zero trust enhanced architecture. 

Incorporating DIDComm into your digital identity strategy is a game-changing move as it means that all parties in an identity ecosystem or communication channel can confidently authenticate each other and exchange information securely. This removes a fundamental weakness in current identity verification and communication.

The value of DIDComm lies in its ability to enable:

Secure communication: Traditional forms of digital communication, such as email, are often not encrypted at all, likely passing in plain text, meaning anyone who can observe network traffic can read it. And while email can be helpful as it serves both as an identifier and a method to communicate, the lack of secure, easy-to-use encryption creates security vulnerabilities when it comes to relaying sensitive information, such as health and financial records. While there are ways to encrypt email, they are typically clunky and not user-friendly. DIDComm solves this security problem in a way that is user-friendly, offering seamless key management and encryption.

DIDComm also fulfills the need to communicate securely while authenticating the identities of the participants. It requires an identifier that is verifiable and adds the ability to communicate both securely and privately.

Direct connection: DIDComm changes the nature of how we interact online, allowing us to regain the ability to communicate directly with others on the internet without dependence on third party platforms. This direct connection restores the security and trust that were lost with the reliance on intermediaries, such as email clients or social media platforms.

Extensibility: Much like the internet itself, DIDComm is highly extensible. It can be enhanced with capabilities through the design of new protocols. This extensibility allows DIDComm to interact with various things, people, and systems, making it incredibly useful. And where APIs are convenient ways to build complex communication protocols into online interactions, they require constant connection between their source and the end user making them difficult to update and manage, especially if connectivity is lost. DIDComm is optimized for, and extremely compatible with, commonly used devices such as mobile phones and tablets.

Mutual authentication: Authentication from one side of a connection, which many traditional digital identity tools are capable of doing, is not enough. Both parties must be able to verify each other’s identities for there to be truly secure communication. But mutual authentication is rarely straightforward and often requires cumbersome setup and maintenance, which can deter widespread adoption. Applications and protocols also overlook the need for comprehensive privacy measures, failing to protect metadata or ensure data integrity across all layers of communication.

DIDComm enables mutual authentication, providing assurance to both parties in a communication channel that they are who they claim to be. While many existing systems authenticate one side of a connection, such as just identifying the customer or end user, it is equally important that the other side is also authenticated. Think about the phishing scams where fraudsters pretend to be your bank or other service in order for you to share your login information with a bogus website or login portal. DIDComm eliminates this. You’ll always know you are interacting with your bank.

Protocol interoperability: DIDComm can also be used alongside more focused protocols, such as OpenID4VC (which is limited to only the exchange of verifiable credentials and doesn’t provide a generalized method of communication). DIDComm goes beyond single purpose protocols and combines the power of verifiable credentials with extensible communication. The trust gained by the exchange of verifiable credentials can then be used to coordinate powerful interactions, secure messaging, and more.
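
For a concrete sense of what travels over such a channel, here is a minimal DIDComm v2 plaintext message as a sketch. The DIDs and payload are placeholders; per the DIDComm Messaging spec, this JSON is subsequently encrypted and authenticated for transport.

```python
# A minimal DIDComm v2 plaintext message, shown before the encryption
# step that protects it in transit. DIDs and content are placeholders.
import json
import uuid

def make_message(sender_did, recipient_did, content):
    return {
        "id": str(uuid.uuid4()),  # unique message id
        "type": "https://didcomm.org/basicmessage/2.0/message",
        "from": sender_did,       # sender DID, enabling mutual authentication
        "to": [recipient_did],    # one or more recipient DIDs
        "body": {"content": content},
    }

msg = make_message("did:example:alice", "did:example:bob", "Hello, Bob")
wire = json.dumps(msg)  # then encrypted per the DIDComm spec
```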

Until DIDComm, the internet has been missing an easy, comprehensive solution for secure and trusted communication. Applications and protocols built on DIDComm support use cases ranging from communicating with government border authorities for the preclearance of international travelers to businesses and financial institutions offering customized products to customers.

To get involved with DIDComm, individuals and organizations can participate in the work of the Decentralized Identity Foundation (DIF), contributing to the development of standards and protocols, collaborate with industry leaders, stay informed about the latest advancements, and help shape the future of decentralized identity and secure communication.

Indicio has extensive experience with DIDComm, and we’d love to help you integrate Indicio Proven into your existing systems. Reach out to Indicio and learn how DIDComm can empower your organization.

The post What is DIDComm? (With Pictures!) appeared first on Indicio.


Aergo

Aergo V4 Update: New Timeline and Key Considerations


As we continue to refine and enhance the Aergo network, we want to update our community on the revised timeline for the upcoming V4 hard fork. This adjustment allows us to ensure full compatibility with our current enterprise customers and their nodes and address a few minor issues identified during testing.

Why the Change?

Enterprise Node/Network Compatibility: Our enterprise customers play a crucial role in the Aergo ecosystem, and it’s vital that their nodes integrate seamlessly with the upcoming hard fork. We’re taking additional time to thoroughly test and align the upgrade with their specific requirements to ensure this.

Minor Issues Identified: During the final stages of testing, a few minor issues were identified that need to be addressed. While these issues do not impact the hard fork’s core functionality, resolving them now will prevent potential disruptions and ensure a smooth transition for all participants.

So far, we’ve completed approximately 95% of our Aergo V4 test scripts, but a few tests are still pending to ensure everything functions as expected. This means we will not meet our previously communicated mainnet hard fork target date of the end of August.

New Timeline

Current Phase: Ongoing Testing and Final Optimizations with 95% of the Work Completed
Testnet Launch: Mid-September
Mainnet Hard Fork: End of September

We will continue working with key participants, including node operators, exchanges, and other partners, to ensure all necessary preparations are completed ahead of the new timeline. This includes additional testing, further optimization, and ensuring the community is fully prepared for the transition.

While delays can be challenging, this additional time is essential to ensure the hard fork meets the high standards our clients and community expect. We appreciate your understanding and continued support as we work to deliver a more robust, more reliable Aergo network.

Stay tuned for more updates!

Aergo V4 Update: New Timeline and Key Considerations was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What Is Password Spraying and How Do You Prevent It?

Learn about password spraying attacks, how they work, and how to defend your organization against them with our comprehensive guide.

Password spraying is an account takeover (ATO) cyberattack where attackers use a single common password or a handful of common passwords to try to access many accounts. This method spreads out login attempts across numerous accounts, making it harder to detect and block.


By using password spraying, attackers can effectively take over user accounts, leading to unauthorized access and potential exploitation of sensitive information.

 

These attacks are increasingly common and can lead to data breaches, financial loss, and damage to your organization's reputation. Understanding password spraying and how to defend against it is key to maintaining security.

Monday, 19. August 2024

Microsoft Entra (Azure AD) Blog

Face Check is now generally available


Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of the Microsoft Entra Suite, a complete identity solution that delivers Zero Trust access by combining network access, identity protection, governance, and identity verification capabilities.

 

 

Unlocking high-assurance verifications at scale


There’s a growing risk of impersonation and account takeover. Bad actors use insecure credentials in 66% of attack paths. For example, impersonators may use a compromised password to fraudulently log in to a system. With advancements in generative AI, complex impersonation tactics such as deepfakes are growing as well. Many organizations regularly onboard new employees remotely and offer a remote help desk. Without strong identity verification, how can organizations know who is on the other side of these digital interactions? Impersonators can easily bypass common verification methods such as counting bicycles on a CAPTCHA or asking which street you grew up on. As fraud skyrockets for businesses and consumers, and impersonation tactics have become increasingly complex, identity verification has never been more important.


Microsoft Entra Verified ID is based on open standards, enabling organizations to verify the widest variety of credentials using a simple API. Verified ID integrates with some of the leading verification partners to verify identity attributes for individuals (for example, a driver’s license and a liveness match) across 192 countries. Today, hundreds of organizations rely on Verified ID to remotely onboard new users and reduce fraud when providing self-service recovery. For example, using Verified ID, Skype has reduced fraudulent cases of registering Skype Phone Numbers in Japan by 90%.

 

Face Check with Microsoft Entra Verified ID


Powered by Azure AI services, Face Check adds a critical layer of trust by matching a user’s real-time selfie and the photo on their Verified ID, which is usually from a trusted source such as a passport or driver’s license. By sharing only match results and not any sensitive identity data, Face Check strengthens an organization’s identity verification while protecting user privacy. It can detect and reject various spoofing techniques, including deepfakes, to fully protect your users’ identities.


BEMO, a security solution provider for SMBs, integrated Face Check into its help desk to increase verification accuracy, reduce verification time, and lower costs. The company used Face Check with Microsoft Entra Verified ID to protect its most sensitive accounts which belong to C-level executives and IT administrators.


Face Check not only helps BEMO improve customer security and strengthen user data privacy, but it also created a 90% efficiency improvement in addressing customer issues. BEMO’s help desk now completes a manual identity verification in 30 minutes, down from 5.5 hours before implementing Face Check.


“Security is always great when you apply it in layers, and this verification is an additional layer that we’ll be able to provide to our customers. It’s one more way we can help them feel secure.” – Jose Castelan, Support and Managed Services Team Lead, BEMO

 

Check out the video below to learn more about how your organization can use Face Check with Microsoft Entra Verified ID:

 

 

Jumpstart with partners


Our partners specialize in implementing Face Check with Microsoft Entra Verified ID in specific use cases or verifying certain identity attributes such as employment status, education, or government-issued IDs (with partners like LexisNexis® Risk Solutions, Au10tix, and IDEMIA). These partners extend Verified ID’s capabilities to provide a variety of verification solutions that will work for your business’s specific needs.


Explore our partner gallery to learn more about our partners and how they can help you get started with Verified ID.

 

Start using Face Check with Microsoft Entra Verified ID


Face Check is a premium feature of Verified ID. After you set up your Verified ID tenant, there are two purchase options to enable Face Check and start verifying:


1. Begin the Entra Suite free trial, which includes 8 Face Check verifications per user per month.
2. Enable Face Check within Verified ID and pay $0.25 per verification.

 

Visit the Microsoft Entra pricing page for more details.

 

What’s Next?


Learn more about how Microsoft Entra Verified ID works and how organizations are using it today, and join us for the Microsoft Entra Suite Tech Accelerator on August 14 to learn about the latest identity management and end-to-end security innovations.

 

Ankur Patel, Head of Product for Microsoft Entra Verified ID

 

 

Read more on this topic 

Watch the Zero Trust spotlight
Learn about the Microsoft Entra Suite
Learn more about Face Check with Microsoft Entra Verified ID in the FAQ

 

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn

liminal (was OWI)

2024 Liminal Landscape: Your Blueprint for Market Leadership

The post 2024 Liminal Landscape: Your Blueprint for Market Leadership appeared first on Liminal.co.

Ocean Protocol

Predictoor Benchmarking: The Effects of Balancing on Calibrated Linear Classifiers

Comparing Calibrated Lasso (L1) vs Ridge Regression (L2) vs ElasticNet (L1-L2) Classifiers With and Without Balancing

Summary

This post describes benchmarks of Ocean Predictoor simulations across the Predictoor models: ClassifLinearLasso, ClassifLinearLasso_Balanced, ClassifLinearRidge, ClassifLinearRidge_Balanced, ClassifLinearElasticNet, and ClassifLinearElasticNet_Balanced. The benchmarks compare the effects of model class balancing on Predictoor profit (accuracy) and trader profit. Each implementation is compared with three different calibrations.

The post then walks through each of the benchmark plots for Predictoor/trader profit and compares the models and their calibrations.

1. Introduction

1.1 What is Ocean Predictoor?

For information about Ocean Predictoor, please refer to the Predictoor Series blog post that catalogs all the blog posts, articles, and talks related to Predictoor. Learn about ML classification, L1 & L2 regularization, calibration, and Predictoor’s simulation tools (“pdr sim” and “pdr multisim”) in the Regularized Linear Classifiers With Calibration blog post.

1.2 What is ML Balancing?

ML balancing refers to techniques used to adjust the distribution of classes in a dataset to counteract bias in a model’s performance on classification problems. Balancing can be achieved through various methods such as undersampling the majority class, oversampling the minority class, or synthetically generating data for underrepresented classes using algorithms like SMOTE (Synthetic Minority Over-sampling Technique). These adjustments help the model predict each class equally well despite their differing sample sizes.

1.3 Understanding Balancing Implementation

The models in this benchmarking blogpost are implemented with Python scikit-learn’s LogisticRegression() function with the class_weight = “balanced” parameter. The parameter’s balancing formula is detailed in the Appendix.

1.4 Benchmarks Outline

We run benchmarks on the approaches:

ClassifLinearLasso — Implemented with scikit-learn’s LogisticRegression() and L1 regularization.
ClassifLinearLasso_Balanced — Implemented with scikit-learn’s LogisticRegression(), L1 regularization, and class_weight=“balanced”.
ClassifLinearRidge — Implemented with scikit-learn’s LogisticRegression() and L2 regularization.
ClassifLinearRidge_Balanced — Implemented with scikit-learn’s LogisticRegression(), L2 regularization, and class_weight=“balanced”.
ClassifLinearElasticNet — Implemented with scikit-learn’s LogisticRegression() and L1 & L2 regularization.
ClassifLinearElasticNet_Balanced — Implemented with scikit-learn’s LogisticRegression(), L1 & L2 regularization, and class_weight=“balanced”.

The models are also benchmarked with the same three calibration approaches, None, Isotonic, and Sigmoid, as in the Linear SVM Classifier with Calibration blog post.
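The six variants and three calibration approaches can be sketched with scikit-learn. The solver choices follow the post (liblinear for L1, LBFGS for L2, SAGA for elastic-net); the l1_ratio and any other tunings here are assumptions, not the benchmark's actual settings:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

def make_model(penalty: str, balanced: bool) -> LogisticRegression:
    """Build one of the six benchmarked variants (sketch)."""
    solver = {"l1": "liblinear", "l2": "lbfgs", "elasticnet": "saga"}[penalty]
    # elasticnet requires an L1/L2 mixing ratio; 0.5 is an assumed value
    extra = {"l1_ratio": 0.5} if penalty == "elasticnet" else {}
    return LogisticRegression(
        penalty=penalty,
        solver=solver,
        class_weight="balanced" if balanced else None,
        **extra,
    )

def with_calibration(model, method):
    """method is None, "isotonic", or "sigmoid", as in the benchmarks."""
    return model if method is None else CalibratedClassifierCV(model, method=method)

# e.g. ClassifLinearLasso_Balanced with None calibration:
lasso_balanced = with_calibration(make_model("l1", balanced=True), None)
```

Each of the eighteen (model, calibration) combinations in the benchmarks corresponds to one call of `with_calibration(make_model(...), method)`.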

1.5 Experimental Setup

The same testing parameters as in the previous blog post, Different Optimizations for Linear SVC, were used in this experimental setup’s my_ppss.yaml file.

2. ClassifLinearLasso With and Without Balancing

Ocean Predictoor’s ClassifLinearLasso and ClassifLinearLasso_Balanced models are implemented with Python scikit-learn’s LogisticRegression() with an L1 regularization & liblinear solver. More information about the liblinear solver is well documented in the previous Different Optimizations for Linear SVC blog post.

2.1.1 Predictoor Profitability

Balancing did not improve the ClassifLinearLasso model’s Predictoor profits. The maximum Predictoor profit achieved by ClassifLinearLasso was 6224.46 OCEAN using a Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. However, the ClassifLinearLasso_Balanced model only profited 5226.80 OCEAN. They both used the same tunings to achieve their max Predictoor profits. Adding ETH-USDT data to the training set did not improve returns.

2.1.2 Trader Profitability

Balancing did improve the max trader profit, and the ClassifLinearLasso_Balanced model beat all the other models benchmarked in this blog post. The model gained $351.73 USD using None calibration trained on 1000 BTC-USDT data and with an autoregressive_n = 2. The unbalanced ClassifLinearLasso model’s best trader profit was $324.93 USD using the same tunings as the balanced model’s to achieve max trader profit. As in the Predictoor profit benchmark, adding ETH-USDT data did not improve trader profit returns.

3. ClassifLinearRidge With and Without Balancing

Ocean Predictoor models ClassifLinearRidge and ClassifLinearRidge_Balanced are implemented with Python scikit-learn’s LogisticRegression() with an L2 regularization & LBFGS solver. More information about the LBFGS solver is in the Appendix.

3.1 ClassifLinearRidge & ClassifLinearRidge_Balanced Benchmarks

3.1.1 Predictoor Profitability

Balancing did not improve the ClassifLinearRidge model’s max Predictoor profit. The max Predictoor profit achieved by ClassifLinearRidge was 6051.65 OCEAN gained by using Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. The ClassifLinearRidge_Balanced model by comparison, only gained 4313.23 OCEAN and used None calibration with 1000 BTC-USDT training samples and autoregressive_n = 2. Neither profited more from the addition of ETH-USDT data to the training dataset.

3.1.2 Trader Profitability

Balancing did not significantly improve trader profit either. The ClassifLinearRidge model’s max trader profit was $342.54 USD with Isotonic calibration, 1000 training samples of BTC-USDT data, and autoregressive_n = 2. Whereas the top trader profit by the ClassifLinearRidge_Balanced model was $304.62 USD with None calibration, trained on 1000 samples of BTC-USDT & ETH-USDT data, and autoregressive_n = 2.

4. ClassifLinearElasticNet With and Without Balancing

Ocean Predictoor models ClassifLinearElasticNet and ClassifLinearElasticNet_Balanced are implemented with Python scikit-learn’s LogisticRegression() with L1 & L2 regularization & SAGA solver. More information about the SAGA solver is in the Appendix.

4.1 ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced Benchmarks

4.1.1 Predictoor Profitability

Balancing did not improve the Predictoor profit of the ClassifLinearElasticNet model. The max Predictoor profit was 5932.50 OCEAN gained by the unbalanced ClassifLinearElasticNet model with Sigmoid calibration, 1000 samples of BTC-USDT training data, and autoregressive_n = 2. The max Predictoor profit gained by the ClassifLinearElasticNet_Balanced model was 4369.96 OCEAN using None calibration, 1000 training samples of BTC-USDT, and autoregressive_n = 2. Generally, adding ETH-USDT data to the training dataset did not improve Predictoor profitability.

4.1.2 Trader Profitability

Balancing did not improve the maximum trader profit achieved by the ClassifLinearElasticNet model either. The unbalanced model gained $330.53 USD with Isotonic calibration & 1000 training samples of BTC-USDT data with autoregressive_n = 2. Meanwhile the ClassifLinearElasticNet_Balanced model dropped in profitability. The balanced model gained $295.91 USD using None calibration and the same 1000 samples BTC-USDT training set with autoregressive_n = 2.

5. Comparison Analysis

5.1 Highest Predictoor Profits

The highest Predictoor profit of all the benchmarks was 6224.46 OCEAN achieved with an unbalanced ClassifLinearLasso model using a Sigmoid calibration, 1000 training samples of BTC-USDT data & an autoregressive_n = 2. The addition of ETH-USDT data to the training set weighed down Predictoor profits; the max Predictoor profit using BTC-USDT & ETH-USDT training data was 5451.79 OCEAN and was generated by the same ClassifLinearLasso model & tunings. Balancing the models decreased Predictoor profit even further. The max Predictoor profit by a balanced model was 5226.80 OCEAN which was gained by the ClassifLinearLasso_Balanced model using the same tunings as for the unbalanced max profits.

5.2 Highest Trader Profits

The maximum trader profit of all the benchmarks was $351.73 USD and was achieved with the ClassifLinearLasso_Balanced model. The balanced model used None calibration, trained on 1000 BTC-USDT data samples, and had an autoregressive_n = 2. The most profitable unbalanced models all used Isotonic calibration instead. The introduction of ETH-USDT to the training set generally decreased the trader profits.

6. Conclusion

Balancing did not improve Predictoor profits, but it did improve trader profit. The maximum trader profit was $351.73 USD in 5000 iterations and was achieved with the ClassifLinearLasso_Balanced model, beating all the other model benchmarks. The balanced model used None calibration, trained on 1000 BTC-USDT data samples, and had an autoregressive_n = 2. The highest Predictoor profit of all the benchmarks was 6224.46 OCEAN and was gained by an unbalanced ClassifLinearLasso model using a Sigmoid calibration, 1000 training samples of BTC-USDT data & an autoregressive_n = 2.

6.1 Patterns in Model Tuning

The benchmarks consistently showed that using a training set of 1000 samples solely from BTC-USDT data, without incorporating ETH-USDT data, coupled with an autoregressive lookback period of 2, yielded the highest profits across various model configurations, regardless of whether they were balanced or unbalanced. This specific setup likely maximized profitability by focusing on the more predictable patterns of Bitcoin transactions and efficiently leveraging short-term historical data to inform trading decisions. However, this approach may cause overfitting when predicting other market conditions or cryptocurrencies since the model’s strong performance on this narrowly defined dataset and lookback period may not generalize well.

6.2 Maximizing Predictoor Profitability

In all unbalanced model benchmarks, the maximum Predictoor profits were gained using a Sigmoid calibration. In all the balanced model benchmarks, the maximum Predictoor profits were generated using None calibration.

6.3 Maximizing Trader Profitability

An interesting pattern emerged in the trader profits: either balanced models using None calibration or unbalanced models using Isotonic calibration yielded the top trader profits. These configurations appeared to minimize losses and maximize profits in a confidence-based trading system. However, combining both balancing and Isotonic calibration did not maximize the trader profits.

6.4 Balancing with None Calibration

The combination of None calibration with balancing improved the Predictoor & trader profits. Without balancing, None calibration caused all the models to perform poorly. Therefore, balancing the models appeared to inversely affect the performance of None calibration compared to the unbalanced models.

7. Appendix: Tables

7.1 ClassifLinearLasso Data Table

A highlight from the ClassifLinearLasso data table is that this data includes the maximum Predictoor profit of all the models, 6224.46 OCEAN. This max was generated with the ClassifLinearLasso model using a Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. The table also shows how Isotonic calibration helped the model achieve a strong trader profit and that the inclusion of ETH-USDT data did not improve profitability.

7.2 ClassifLinearLasso_Balanced Data Table

A noteworthy data point from the ClassifLinearLasso_Balanced data table is that it includes the max trader profit of all the benchmarks. The ClassifLinearLasso_Balanced model gained $351.73 USD using None calibration, trained on 1000 BTC-USDT data samples with an autoregressive_n = 2. The data table also shows that balancing decreased Predictoor profits & that the inclusion of ETH-USDT data generally decreased profitability overall.

7.3 ClassifLinearRidge Data Table

The data table for the ClassifLinearRidge model shows that it achieved a max Predictoor profit of 6051.65 OCEAN by using Sigmoid calibration with 1000 training samples of BTC-USDT and an autoregressive_n = 2. This calibration was also used with the ClassifLinearLasso model to generate its max Predictoor profit. It also matches the ClassifLinearLasso data in that an Isotonic calibration improved trader profit returns. The inclusion of ETH-USDT data decreased profitability.

7.4 ClassifLinearRidge_Balanced Data Table

Balancing did not improve either the ClassifLinearRidge model’s max Predictoor profit or trader profit. The ClassifLinearRidge_Balanced model only gained a max Predictoor profit of 4313.23 OCEAN and used None calibration with 1000 BTC-USDT training samples and autoregressive_n = 2. The top trader profit by the ClassifLinearRidge_Balanced model was $304.62 USD with None calibration, trained on 1000 samples of BTC-USDT & ETH-USDT data, and autoregressive_n = 2. The addition of ETH-USDT data to the training dataset decreased profitability.

7.5 ClassifLinearElasticNet Data Table

Since the ClassifLinearElasticNet model uses both L1 & L2 regularization, it is expected to show behavior similar to both ClassifLinearLasso and ClassifLinearRidge, and this is exactly what the data shows. The model’s max Predictoor profit & trader profit were gained under the same circumstances: a Sigmoid calibration for max Predictoor profit & Isotonic for max trader profit, each with 1000 samples of BTC-USDT training data and autoregressive_n = 2. The data also agrees on the effect of ETH-USDT data weighing profits down. Max Predictoor profit was 5932.50 OCEAN and max trader profit was $330.53 USD, showing that combined L1 & L2 regularization decreased Predictoor profit somewhat and yielded a trader profit above ClassifLinearLasso’s but below ClassifLinearRidge’s.

7.6 ClassifLinearElasticNet_Balanced Data Table

The ClassifLinearElasticNet_Balanced data table shows that balancing did not improve either the Predictoor profit or trader profit of the ClassifLinearElasticNet model. The max Predictoor profit was 4369.96 OCEAN, and the max trader profit was $295.91 USD. Like the ClassifLinearLasso_Balanced & ClassifLinearRidge_Balanced models, the ClassifLinearElasticNet_Balanced model used None calibration and 1000 samples of BTC-USDT data with autoregressive_n = 2 to achieve these maximums. Generally, adding ETH-USDT data to the training dataset did not improve profitability.

8. Appendix: Details on Model Class Balancing

8.1 About Scikit-learn’s Balancing Algorithm

The models in this blog post are implemented with Scikit-learn’s LogisticRegression() function with the class_weight = “balanced” parameter. The balancing algorithm automatically adjusts the weights of the classes based on their frequencies in the input data, using the formula weight_c = n_samples / (n_classes * count_c), where count_c is the number of samples in class c.

This adjustment helps to treat each class equally despite their differing sample sizes. In imbalanced datasets without such adjustments, the classifier might predominantly predict the majority class, ignoring the minority classes.
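Scikit-learn's class_weight="balanced" computes weight_c = n_samples / (n_classes * count_c), which can be reproduced in a few lines to see how minority classes get up-weighted:

```python
from collections import Counter

def balanced_class_weights(y):
    """Mirror scikit-learn's class_weight='balanced' formula:
    weight_c = n_samples / (n_classes * count_c)."""
    counts = Counter(y)
    n_samples = len(y)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * counts[c]) for c in counts}

# Imbalanced labels: 8 "down" candles vs 2 "up" candles
y = ["down"] * 8 + ["up"] * 2
weights = balanced_class_weights(y)
print(weights)  # {'down': 0.625, 'up': 2.5}
```

The rare "up" class receives four times the weight of the common "down" class, so the classifier's loss function no longer favors always predicting the majority class.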

8.2 About the LBFGS Solver

The LBFGS solver (Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm) is an optimization algorithm in the family of quasi-Newton methods. It approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. The LBFGS solver was chosen for the ClassifLinearRidge model due to its efficiency in handling a large number of features.

8.3 About the SAGA Solver

The SAGA (Stochastic Average Gradient Descent Algorithm) solver is a variant of stochastic gradient descent that supports both L1 and L2 regularization. It combines the sparse gradient updates of the Proximal Gradient method with a variance reduction technique that accelerates the convergence of stochastic methods. SAGA is particularly effective in ML applications with high-dimensional feature spaces, and since it also supports L1 & L2 regularization, it was chosen as a solver for the ClassifLinearElasticNet model.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

Predictoor Benchmarking: The Effects of Balancing on Calibrated Linear Classifiers was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

How verifiable credentials disrupt online fraud, phishing, and identity theft

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.

By Ken Ebert

Everyone’s online life begins with a user account, a login, and a password, which, combined, turn into an identity. I am my email address — or social media account login. For the past twenty-five years, life online has evolved by accumulating these digital identifiers. The more we have, the more we can do online.

We don’t really own these digital identifiers: they’re lent to us on the assurance that we are who we claim to be, via the personal information we provide. This information is stored in a database along with lots of other people’s personal data so that they, too, can have a digital identifier.

This is how we identify each other on a network that was designed to manage computer identity rather than personal or organizational identity. It’s been amazingly successful at allowing billions of people to exist and interact online. Unfortunately, what it hasn’t been amazingly successful at is preventing all those people from having their identities stolen or faked.

One anecdote may be familiar: you get an email “from your bank.” Due to suspicious activity, your account has been locked and you need to log on to unlock it. You log in (but not you, because you’d never be fooled by this, right?) and… it’s not your bank. Whoever you’ve just given your login details to can now access your real bank account. Ninety percent of successful data breaches are the result of successful phishing.

Or maybe it doesn’t have to be this sophisticated: your password is 1,2,3,4,5 — and Malicious Actors Inc guess their way into your account. Or you reuse the same password across accounts and a data breach for one of these accounts means multiple accounts are now accessible to hackers.

And not just you. Once attackers are inside a database, every account is compromised. The whole defense collapses if a single access point is breached.

Identity fraud can also be sophisticated, such as someone using generative AI tools to create a deepfake of your biometrics or those of your boss — and you give them 25 million dollars, thinking you’re following legitimate directions.

Yes, there are security solutions like multifactor authentication, but they can only do so much, given that the underlying architecture of ‘account logins-passwords-databases’ is so hard to defend. And many people dislike the friction they add to online interaction, which is already burdened by an endless cycle of forgetting and resetting passwords. I recently joined a Teams meeting where I had to receive an email with a PIN code, experience two biometric checks, and supply a two-digit code from my authenticator app. 

A digital transformation in how we share and verify data
Here’s what verifiable credentials and decentralized identity do: They remove the underlying problem of user accounts, logins, passwords.

Instead of authenticating a user account through a login and password, a user is authenticated with a verifiable credential and cryptography. 

What is a verifiable credential? Think of it like an envelope for sealing and sharing digital information. The source of the envelope (the organization issuing the credential) can be cryptographically verified. The information in the envelope is digitally signed, which, in essence, means that any attempt to alter or tamper with the information breaks the seal and can be detected.

But this is only one of the elements in the new authentication ‘stack.’

You can accept and share a verifiable credential because the software in your digital wallet has created an address for it to be sent to. This address — a decentralized identifier or DID — is under your control and you can prove this control cryptographically when you interact with another DID. 

The combination of a DID and a verifiable credential enable you to prove that you are in control of a specific identity, and you can now attach any data to that identity by writing it to a credential.

The upshot is that people hold their data, authenticate themselves and each other cryptographically, and share data that can be trusted because we can know it hasn’t been altered (assuming that we trust the original source of the data).
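The "sealed envelope" idea can be sketched in a few lines. Real verifiable credentials use asymmetric signatures (for example Ed25519) with DID-based key resolution; the stdlib HMAC below stands in only to show how any alteration of the payload breaks the seal:

```python
import hashlib
import hmac
import json

def sign(payload: dict, key: bytes) -> str:
    """Issuer seals the credential payload. (A real VC issuer would use an
    asymmetric signature; HMAC is a stand-in for this illustration.)"""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str, key: bytes) -> bool:
    """Verifier recomputes the seal; any tampering changes the digest."""
    return hmac.compare_digest(sign(payload, key), signature)

key = b"issuer-secret"  # hypothetical issuer key
credential = {"holder": "did:example:123", "over_18": True}
sig = sign(credential, key)

print(verify(credential, sig, key))   # True: seal intact
credential["over_18"] = False          # tamper with a claim
print(verify(credential, sig, key))   # False: tampering detected
```

The point is not the specific primitive but the property: the verifier trusts the data because the cryptography proves it is exactly what the issuer signed.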

This is the instantaneous magic behind seamless digital travel. A person takes their physical passport and — provided it has a chip — the software reads the information from the passport and converts it into a digital credential. The software also requires the person to do a liveness check with a selfie and then compares the selfie with the digital image from the passport chip. The passport data is authenticated as having come from a legitimate passport-issuing authority, and the person is issued a Digital Travel Credential (DTC) by an airline.

When a DTC is presented (touchlessly), the source of the DTC is instantly authenticated, along with the integrity of the data in the DTC. Additional biometric authentication and, of course, biometric access to the device, provide further confidence that the person presenting the DTC is the holder of a legitimate passport. 

The result is portable trust. Verifiable data can go from anywhere to everywhere — and so can you.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.


KuppingerCole

Oct 17, 2024: IAM meets ITDR: A Recipe for Robust Cybersecurity Posture

In today's digital landscape, identity is at the forefront of enterprise security. With a growing number of cyberattacks originating from compromised identities, organizations must adopt an identity-first security approach. This approach emphasizes proactive measures over reactive responses, crucial for minimizing risks and safeguarding sensitive information.  

Sunday, 18. August 2024

KuppingerCole

Eight Recommendations for CISOs in 2025


In this episode of the KuppingerCole Analyst Chat, host Matthias Reinwarth is joined by Annie Bailey, Research Strategy Director at KuppingerCole Analysts, to discuss the key trends that will shape the cybersecurity landscape through 2025. The conversation explores the increasing complexity of the attack surface, the growing importance of resilience and recovery in cybersecurity strategies, and the dual role of AI as both a threat and a defensive tool. In addition, the discussion covers the impact of emerging regulations, the need for advanced cybersecurity infrastructure, and how organizations can prepare for the anticipated challenges ahead.



Friday, 16. August 2024

Spruce Systems

SpruceID Joins Harvard and Microsoft Researchers for New “Personhood Credential” Proposal

Empowering humans is the best way to fight a coming wave of A.I.-powered fraud and disinformation.

Last week, Wayne Chang (CEO of SpruceID) and a broad coalition of researchers from Harvard, Microsoft, MIT, the Decentralized Identity Foundation (DIF), and other organizations released a major new proposal for fighting online disinformation and fraud. The proposed solution is a digital credential that would give internet users a powerful new tool for proving their authenticity online, while also ensuring strong privacy.

Our new paper proposes a “personhood credential,” or PHC, based on much the same cryptography-based digital credential technology that powers SpruceID’s mobile driver’s licenses in California and elsewhere. Much like SpruceID’s mDL deployments, the PHC system would reveal only the minimum necessary information about any user: in this case, simply that they are a human, not a bot or AI agent. The PHC would not disclose any identifying information, and is also designed to prevent cookie-like traceability. 

The credential would be an optional tool, primarily for specific users who want to establish a high level of credibility online while protecting their privacy, and for service providers who want to reduce fraud.

Why We Need to Prove Personhood Online

One major goal of the PHC is to distinguish authentic content on social media from deepfakes, coordinated manipulation, and other automated activity. Worries about inauthentic content online have been high for close to a decade now, but the recent advent of generative AI models, including their ability to mimic specific individuals on video, has created an even higher-risk environment for disinformation.

Proving authenticity on the internet is difficult for technical reasons, and no truly good solution has ever emerged. That’s one reason online financial fraud and identity fraud have steadily accelerated, now costing individuals and institutions tens of billions of dollars annually. The rise of AI generated content, meanwhile, has triggered worries of a “dead internet” full of robots talking endlessly to one another.

A digital credential to demonstrate personhood could combat both disinformation and fraud, mitigate against denial-of-service attacks using automated “botnets,” and empower individuals to prove their authenticity–even if they wish to remain anonymous.

Harnessing the Power of Encryption for Online Authentication

The proposed new PHC system is fundamentally user-controlled. Among other features, that means:

1. The PHC is optional for all users.

2. It cannot reveal real-world identities.

3. Users can choose their PHC issuer.

Optionality: While any natural person could request and receive a PHC, a PHC would not (and in fact could not) be required to use the internet. Specific high-security websites or online services, such as banking portals, may choose to require the PHC as an anti-fraud measure. More generally, we expect PHC use and adoption to be driven from the bottom up by users who wish to prove their authenticity.

Anonymity and Pseudonymity: Crucially, the system is designed to prove only that the holder is a person, without transmitting any specific data, such as name, credit card, birth date, or location. This is possible because issuers confirm an applicant’s authenticity offline, then issue an anonymized PHC credential.

The digital credentials themselves are validated and secured by cryptographic signatures. Related techniques are used to ensure that even these signed credentials are “unlinkable” – that is, that a user’s online activity cannot be tracked or collated. If the user desires, however, the PHC could also be used to preserve a single user identity over time.
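The unlinkability property described here can be sketched with per-service pseudonyms. The snippet below is a simplified, hypothetical illustration using only Python's standard library: real PHC designs rely on zero-knowledge proofs and blind signatures, and every name here (including the HMAC standing in for the issuer's signature) is an assumption, not the paper's actual protocol.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of per-service pseudonyms, one ingredient of an
# unlinkable personhood credential. Real PHC designs use zero-knowledge
# proofs and blind signatures; the HMAC below merely stands in for a
# signing key, and all names are illustrative.

ISSUER_KEY = secrets.token_bytes(32)  # issuer's private key (stand-in)

def issue_phc(user_secret: bytes) -> bytes:
    """Issuer attests 'this secret belongs to a verified human' by signing
    a commitment to it -- never the identity itself."""
    commitment = hashlib.sha256(user_secret).digest()
    return hmac.new(ISSUER_KEY, commitment, hashlib.sha256).digest()

def pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a per-service pseudonym: stable within one service,
    unlinkable across services."""
    return hashlib.sha256(user_secret + service_id.encode()).hexdigest()

user_secret = secrets.token_bytes(32)
cred = issue_phc(user_secret)

# Same user, same service: a stable identity over time (optional feature).
assert pseudonym(user_secret, "bank.example") == pseudonym(user_secret, "bank.example")
# Same user, different services: distinct pseudonyms, so activity can't be collated.
assert pseudonym(user_secret, "bank.example") != pseudonym(user_secret, "social.example")
```

The key design point is that the service identifier enters the derivation, so two relying parties see unrelated identifiers for the same human.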

Issuer Choice: Personhood credentials are issued and signed by an open network of PHC issuers, with measures to prevent the issuing of multiple credentials to a single person. The open issuer network ensures no issuer is able to abuse their power, for instance by limiting the uses a PHC is put to, or selecting who is eligible to receive one.

The Open PHC Issuer Network

It may seem counterintuitive that a proof of personhood credential can be trusted to a totally open network of self-selected issuers. While there are challenges and tradeoffs, we and our research coalition believe such a system strikes a balance: preserving democratic openness, while harnessing market dynamics to elevate the most trustworthy PHC issuers.

The alternative, restricting issuance only to already “trusted” issuers, would both restrict public access to the PHC credential, and create a “single point of failure” for the broader system. Potential failure conditions for a restricted-issuer system would include compromise by external hacking or internal subversion, such as the use of DMV staff privileges to gain unauthorized data access. Even worse, though, is the potential emergence of a “ministry of information” under which issuers control how PHCs are used to validate online content. 

To prevent those outcomes, the PHC credential must be available from a variety of sources. Different issuers will have different standards and procedures for proving user authenticity. These could range from government-issued identity documents and an in-person interview, to versions of decentralized identity relying on records of interactions like shopping and messaging, documented using digital proofs that can’t be faked by artificial intelligence.

By the same token, services seeking to validate humanity would be free to choose which issuers’ credentials to accept, unleashing competitive dynamics that would motivate provision of PHC services tailored for a variety of applications and users. For instance, a bank might require a PHC issued by a government entity, while a social media site could accept a less rigorous PHC. 

One challenge of the open issuer network is the risk that multiple issuers would issue PHCs to the same natural human, potentially allowing those additional credentials to be misused. This risk is still being tackled by researchers, but the possibility of multiple issuance still represents a significant improvement from the current, unlimited ability of bad actors to impersonate humans online.

Above all, the open nature of PHC issuance would prevent the accrual of more power to governments, providing a free-market alternative to governmental “ministries of truth” exercising anti-democratic information control.

Proving Humanity and Protecting the Information Commons

The internet is reaching a crisis point thanks to the continuing rise of spam, fraudulent content, data leakage, and hacking. The adoption of the PHC credential would benefit the entire digital information and security ecosystem, not merely those who hold or accept the credential.

The PHC would immediately distinguish authentic online content and interactions from automated manipulation, improving the online experience even for users who don’t hold a PHC themselves. That’s both because the most authentic content would be easy to spot, and because the very existence of this new form of verification would disincentivize the creation of misleading content.

The PHC would provide this benefit without adding more personal data to “data hoards” likely to be targeted by hackers. Indeed, it’s these very large-scale hacks, such as the recent theft of 3 billion records, including government ID numbers, that are rapidly rendering knowledge-based security measures obsolete, and better approaches necessary. In this compromised environment, adding the PHC as an access control tool for sensitive online applications would have a substantial impact on hacking and fraud.

For now, the personhood credential is a general proposal, with much work remaining both in designing the overall system and creating specific technical implementations. That means its benefits are still some time in the future, but the online fraud and disinformation it aims to address isn’t going anywhere – if anything, the situation seems poised to get worse. 

SpruceID is proud to have a hand in this major new proposal, and we’ll be contributing our expertise in identity, privacy and encryption to help bring it to fruition. If you see potential for the PHC to strengthen your organization’s digital efforts, please reach out – we’d be excited to learn about your needs, and help you prepare for a more authentic online future.

Read the Full Paper

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. Learn more on our website.


Dock

The EU Digital Identity Wallet: A Beginner's Guide

With the approval of eIDAS 2, 400 million EU citizens will soon have a EU Digital Identity Wallet containing legal credentials issued by their national governments. 

The shift from physical documents to digital IDs is one of the most significant changes in identity history. This evolution requires ID companies to adapt, innovate, and reimagine the possibilities of digital verification.

The EU Digital Identity Wallet provides a secure and versatile storage for digital credentials. It aims to simplify digital interactions across borders while ensuring interoperability and user control.

In this post, we cover the details of the EU Digital Identity Wallet, including its features, benefits, and applications, so that you gain a comprehensive understanding of it.

Let's dive in: https://www.dock.io/post/eu-digital-identity-wallet


Civic

Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation

In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Austin Federa, Head of Strategy at Solana Foundation. They explore the world of permissioned and permissionless assets on Solana, when builders need to move the dial towards adding restrictions to comply with real-world regulations and how this can bring more web2 […]

The post Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation appeared first on Civic Technologies, Inc..


Dock

Dock implements BBS as the default signature algorithm in the Anonymous Credentials format

Technology standards are always changing, and it can be expensive for products to keep up. The rate of change is even faster for new technologies with emerging standards, such as the standards for verifiable credentials that are used to create reusable digital identities. Our customers don’t have to worry because our APIs hide the changes in the underlying credential standards. During the April 2024 Internet Identity Workshop, Kazue Sako from Waseda University provided an update on recent developments in BBS cryptography which serves as a good example of the complexity hidden by our products.

Dock’s Anonymous Credentials use an advanced cryptographic signature algorithm that was invented in 2004 and is known as BBS. BBS signatures support advanced privacy capabilities like unlinkable selective disclosure, while also being faster and smaller than other signature algorithms with similar capabilities. However, when BBS was originally proposed no one knew how to mathematically prove the security of the algorithm. Various modifications were made to BBS signatures to make it easier to prove their correctness, and in 2016 a version of the algorithm called BBS+ proved to be efficient enough to be widely used in verifiable credentials. We used BBS+ signatures when we first implemented our Anonymous Credentials format.

A paper published in 2023 includes a proof for the original BBS algorithm while also proposing some efficiency improvements compared to the BBS+ approach to verification of signatures with selective disclosure. Now that BBS signatures are known to be correct, we can use them instead of the BBS+ variant and benefit from the reduced computation requirements. The 2023 variant of BBS replaced BBS+ as the target of standardization at the IETF. We implemented support for BBS2023 last fall, and recently made it the default signature algorithm in the Anonymous Credentials format. This change is transparent to our customers who now use the best version of the algorithm when issuing new credentials while we also ensure that existing credentials remain verifiable.
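As a rough illustration of the selective-disclosure mechanic that BBS enables, the sketch below uses salted hash commitments with an HMAC standing in for the issuer's signature. This is emphatically not BBS, which relies on pairing-based cryptography and yields unlinkable proofs; it is a simplified, hypothetical model showing only the reveal-a-subset idea, and all names are assumptions.

```python
import hashlib
import hmac
import os

# Simplified selective-disclosure illustration. Real BBS signatures use
# pairing-based cryptography and produce *unlinkable* proofs; this sketch
# uses salted hash commitments plus an HMAC stand-in signature, and the
# resulting presentations are linkable.

ISSUER_KEY = os.urandom(32)

def commit(value: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + value.encode()).digest()

def issue(attributes: dict) -> dict:
    """Issuer commits to each attribute and signs the full set once."""
    salts = {k: os.urandom(16) for k in attributes}
    commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}
    payload = b"".join(commitments[k] for k in sorted(commitments))
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return {"salts": salts, "commitments": commitments, "signature": signature}

def present(cred: dict, attributes: dict, reveal: set) -> dict:
    """Holder discloses only the chosen attributes (with their salts)."""
    return {
        "revealed": {k: (attributes[k], cred["salts"][k]) for k in reveal},
        "commitments": cred["commitments"],
        "signature": cred["signature"],
    }

def verify(presentation: dict) -> bool:
    payload = b"".join(presentation["commitments"][k]
                       for k in sorted(presentation["commitments"]))
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(commit(v, salt) == presentation["commitments"][k]
               for k, (v, salt) in presentation["revealed"].items())

attrs = {"name": "Alice", "over_18": "true", "city": "Lisbon"}
cred = issue(attrs)
p = present(cred, attrs, {"over_18"})  # disclose only the age attribute
assert verify(p) and "name" not in p["revealed"]
```

The holder reveals a subset of attributes while the verifier still checks one signature over the whole set; BBS achieves the same effect with a zero-knowledge proof instead of disclosing salts, which is what makes its presentations unlinkable.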

As you follow our release notes and roadmap updates, you’ll see additional examples of how we track the evolution of identity technologies so that our customers don’t have to.


Gartner Rebuttal: Why Decentralized ID can improve KYC Compliance

In Gartner’s recently released 2024 Market Guide for Decentralized Identity, they suggest that organizations looking to improve their compliance processes with decentralized identity technologies should adopt a skeptical stance. They say:

A significant number of vendors claim to have the functionality within their DCI solution to comply with KYC and AML regulations. DCI vendors see this as crucial for making KYC and AML compliance processes more efficient. However, Gartner’s view is that, at this time, banks cannot make a good business case for transitioning away from their traditional compliance process, regardless of its inherent challenges.

At Dock Labs, we regularly speak with organizations who are unhappy with the costs and pains associated with KYC and AML compliance. These forward-thinking organizations find that reusable identity credentials provide them with essential tools to lower the costs of verifying individuals, and improve the experience of the users onboarding to their systems. They get these benefits without increasing fraud or compliance risk while simultaneously improving their compliance with privacy requirements and reducing the cost of protecting user data.

The difference in perspective is that these innovative organizations don’t see DCI as a replacement for existing compliance processes, but as new tools that can augment what is working now. With verifiable credentials as part of their toolbox, IAM practitioners can assemble a better solution than can be obtained solely with traditional compliance processes.

For example, think about opening a savings account online. You will likely be required to follow a traditional approach to compliance which requires a number of steps to verify your identity:

1. Take a picture of your national identity document and a selfie in order to validate your legal name.

2. That legal name must then be checked against a watchlist of sanctioned people.

3. Enter your mailing information, which will be validated with an address service.

4. Enter a phone number, which will be verified by sending you a text message that you must enter into the website.

5. Enter an email address, which will be verified by sending you a link that you have to click on.

At this point you can finally set up your account. After recently completing this process with a family member, we were offered the opportunity to open a credit card with a partner bank. But we gave up when we found that we would need to go through the whole process again.

I wished that the savings bank would have issued us a credential that would be accepted by the partner bank showing that our legal name, tax number, mailing information, phone number, and email address had already been validated. Accepting the data through a credential would have saved us the hassle of data entry and re-validation, while also ensuring that the partner bank is only using data that has been verified by a trusted source according to the rules of their partnership agreement.

It is true that using credentials does not remove the partner bank’s duty to record their basis for trusting the information. Particularly sensitive checks, such as the watchlist check, may need to be repeated. The referring bank may also charge a fee for the use of the identity credentials that they issued. Regardless, the credential-enabled process is much less painful for everyone involved.
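A hedged sketch of how the partner bank's intake might accept such a credential: a trust registry of accepted issuers, signature verification, and a fallback to the traditional flow when verification fails. All names and keys here are hypothetical, and an HMAC over a shared key stands in for the asymmetric signatures a real deployment would use.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: a partner bank accepting pre-verified KYC attributes
# from a referring bank's credential. Real deployments use asymmetric
# signatures (e.g., BBS or Ed25519); a shared-key HMAC stands in here.

TRUSTED_ISSUERS = {"savings-bank.example": b"key-from-partnership-agreement"}

def verify_credential(credential: dict):
    """Return the verified claims if the issuer is trusted and the signature
    checks out; otherwise None (fall back to traditional KYC)."""
    key = TRUSTED_ISSUERS.get(credential["issuer"])
    if key is None:
        return None
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return None
    return credential["claims"]

claims = {"legal_name": "A. Person", "email_verified": True, "phone_verified": True}
sig = hmac.new(TRUSTED_ISSUERS["savings-bank.example"],
               json.dumps(claims, sort_keys=True).encode(), hashlib.sha256).hexdigest()
cred = {"issuer": "savings-bank.example", "claims": claims, "signature": sig}

assert verify_credential(cred) == claims                          # accepted: skip re-entry
assert verify_credential({**cred, "issuer": "unknown.example"}) is None  # untrusted issuer
```

Note that the trust registry encodes exactly the point made above: the verifier records which issuer it relied on, and sensitive checks like watchlist screening can still be repeated on the verified claims.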

Even Gartner acknowledges that decentralized identity technologies can help streamline regulatory compliance. We wholeheartedly agree with the advice they give near the end of their report, when they say:

Although regulations were initially expected to erect barriers to the adoption of DCI in heavily regulated industries like financial services, new DCI use cases allow organizations to comply with them. SRM leaders should explore how DCI can enable them to comply with regulations more easily, privately, and securely than conventional means.

We at Dock Labs are happy to help organizations stay ahead of their competitors by improving their KYC and AML compliance today.


PingTalk

Session Hijacking - How It Works and How to Prevent It

Learn about session hijacking, detection methods, and prevention techniques to safeguard your digital assets.

A session hijacking attack is one of the more common ways in which malicious actors can commit fraud. It allows black hat hackers to completely bypass secure authentication mechanisms, including multi-factor authentication (MFA) and others. This, in turn, grants access to a user’s secured accounts and systems, which can give attackers free rein to steal sensitive data. These types of attacks pose a serious threat to cybersecurity, both on an individual and organizational scale. The ramifications can include extensive financial losses and long-term damage to an organization’s reputation.

 

You may not be able to prevent your organization from being targeted by session hijacking attacks, but there are steps you can take to recognize these attacks and stop them in their tracks. Keep reading to explore the hallmarks of session hijacking, the various ways it can be attempted, and the prevention methods you can deploy to protect your users and your business.
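Two of the standard prevention methods can be sketched in code: regenerating the session identifier at authentication (which defeats session fixation, one hijacking variant) and setting restrictive cookie attributes (which limits theft via script injection or sniffing). This is a minimal, framework-free sketch assuming an in-memory session store; all names are illustrative, not a specific product's API.

```python
import secrets
from http import cookies

# Sketch of two common session-hijacking mitigations: ID regeneration at
# login, and Secure/HttpOnly/SameSite cookie attributes. In-memory store
# and names are hypothetical.

SESSIONS: dict = {}

def new_session() -> str:
    sid = secrets.token_urlsafe(32)  # unguessable, high-entropy session ID
    SESSIONS[sid] = {"authenticated": False}
    return sid

def login(old_sid: str) -> str:
    """On authentication, discard the pre-login ID so a fixated or
    previously sniffed ID becomes worthless."""
    data = SESSIONS.pop(old_sid, {})
    data["authenticated"] = True
    new_sid = secrets.token_urlsafe(32)
    SESSIONS[new_sid] = data
    return new_sid

def session_cookie(sid: str) -> str:
    c = cookies.SimpleCookie()
    c["session"] = sid
    c["session"]["secure"] = True       # only sent over HTTPS
    c["session"]["httponly"] = True     # invisible to JavaScript (limits XSS theft)
    c["session"]["samesite"] = "Strict" # not sent on cross-site requests
    return c.output(header="")

sid = new_session()
authed = login(sid)
assert sid not in SESSIONS and SESSIONS[authed]["authenticated"]
assert "HttpOnly" in session_cookie(authed) and "Secure" in session_cookie(authed)
```

Regeneration matters because an attacker who plants or captures a pre-login session ID gains nothing once that ID is retired at the moment of authentication.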


BlueSky

Highlighting Community Starter Packs

Join a starter pack today!

In June, we released starter packs — personalized invites that allow you to bring friends directly into your slice of Bluesky.

Check out and join some of the starter packs that the Bluesky community has created!

I've made a start, only a few here so far so will keep searching - but if anyone knows any UK MPs I've missed let me know and I will add go.bsky.app/FACCR8t #ukpolitics

— Geoff (@geoffdeburca.bsky.social) Aug 13, 2024 at 2:31 AM

New here and like comics? Well @gregpak.bsky.social has you covered! Here are two starter sets of folks to follow! First a bunch of creators go.bsky.app/R4eqmGf

— Adam P. Knave (@adampknave.com) Aug 13, 2024 at 7:44 AM

I have made a ChemSky starter pack and am posting here to help boost visibility. This list is not exhaustive, but should hopefully help newcomers or rejoiners find some accounts and feeds to follow go.bsky.app/C9BtrLj

— Laura Howes (@laurahowes.bsky.social) Aug 15, 2024 at 11:44 AM

I made a starter pack for those fleeing #EduTwitter and joining #EduSky which should let you find a bunch of good people. go.bsky.app/HQHD4R1

— Caroline Spalding (@mrsspalding.bsky.social) Aug 15, 2024 at 6:34 AM

Calling all folk with an interest in UK public policy: I’ve created a starter pack of think tankers, policy analysts & commentators active on @bsky.app go.bsky.app/LtNiL1o

— Jessica Studdert (@jesstud.bsky.social) Aug 14, 2024 at 7:46 AM

starter pack of OC artists who are under 100 followers at the time of making this list! 🩷 go.bsky.app/6LGDx5g

— Saba 🏳️‍🌈 (@ace-of-dragons.bsky.social) Aug 14, 2024 at 11:36 AM

Starter pack for #nufc fans here. go.bsky.app/HmjNT4

— Kev Lawson (@editkev.football) Aug 11, 2024 at 2:09 PM

I love this starter pack business, so I've made one of some of the women I follow on here (including the estate of Ursula K Le Guin because I'm obsessed). I'm sure I'm missing a ton of great people. Anyone else I should include? go.bsky.app/2rubRr3

— Alona Ferber (@aloner.bsky.social) Aug 15, 2024 at 6:21 AM

Starter Pack for Seismology and Earthquake people. Add missing accounts in the comments and I'll add them to the pack! ⚒️🧪 #Geology go.bsky.app/ND4oS9k

— Henning ⚒️ (@geohenning.bsky.social) Aug 12, 2024 at 11:28 AM

Find more communities directly on Bluesky! See you there: bsky.app.

Thursday, 15. August 2024

Spruce Systems

Navigating the Jungle of Digital Credential Standards

SpruceID's multi-standard approach to digital identity credentials prioritizes user convenience, privacy, security, and sustainability, ensuring long-term functionality and adaptability.

The ongoing transition towards digital identity credentials will have many benefits for users and society, from increased privacy to preventing disinformation. The first form of digital credential that’s reaching the public is the mobile driver’s license, currently being piloted by several U.S. states. But there are many other potential digital credentials, from professional licenses and degrees to simple event passes, each with its own nuances. 

The builders architecting these systems, often from the ground up, face a challenge: choosing the right technical standard for presenting data. Standards enable the open, interoperable nature of digital identity systems, making sure potentially countless credential issuers, holders, and verifiers overseeing a huge variety of digital credentials are all on the same page. 

Digital credentials will eventually include not just driver’s licenses but more niche certifications from food handling to off-road vehicle training to professional affiliations. Agents handling related credentials will have to speak the same language – that is, use the same data standard – to interact in a smooth and trustworthy way. Email is another technology that runs on a shared data standard, which is why it can be sent from a Gmail account but still be readable via Hotmail or any other email service. 

For better or worse, though, the world of digital credential standards is already wildly fragmented. For instance, there are already no fewer than two digital formats to verify educational credentials: OpenBadges and the European Digital Credential. A recent report from the European Union Agency for Cybersecurity (ENISA) describes six different formal standards for digital identity credentials, among them the International Organization for Standardization’s (ISO) Mobile Driver’s License standard (mDL); standards under the EU’s eIDAS authority; and both OpenID and FIDO2 formats for online identity and security. And that’s just the tip of the iceberg. 

The choice of standard will also be shaped by the scope and nature of a project: Standards can be built for very specific and similar purposes, or they can be generalized and overlapping. Further, while some standards will grow into thriving ecosystems, others may fall to the wayside, just like the Betamax videotape standard. These and other factors can make choosing the right credential standard to build a system feel simultaneously very important and difficult.

But at SpruceID, we’ve taken a different approach to the stressful quandary of digital credential standards. Rather than choosing one standard to build our tools around, we integrate multiple standards that meet our goals for user convenience, privacy, sustainability, and security. This ensures our customers get what they need today and that our systems will still be functional tomorrow—even in the (unlikely!) event that we’re not around to maintain them.

Real Results Beat Abstract Superiority

The biggest pitfall when evaluating standards is trying to decide which one is the “best,” whether for your application or in general. The truth is that even if one technical roadmap offers clear advantages over another, parallel questions such as adoption rates and integrations can trump those concerns. The technically superior standard simply doesn’t always win—just ask Betamax, which lost the fight with VHS despite being better in every way.

So, instead of looking for some abstract “best,” here at SpruceID, we focus on whether each standard adequately provides four things: utility, privacy, security, and sustainability. Our systems integrate multiple standards that fulfill those needs and let users issue, manage, or verify credentials in all the supported standards formats.

Privacy, in particular, is a major motive for the overall shift to digital identity, which opens up new possibilities for users to control information about themselves carefully. Our North Star is scholar Helen Nissenbaum, who emphasizes the importance of social context for our sense of privacy. Older, analog forms of identity could ‘leak’ information in the wrong context, and some early digital ID systems could reveal too much data about a user’s activities to credential issuers. 

But good digital identity standards give users control over precisely what data they’re sharing and when – including protecting them from uninvited monitoring, even from state authorities. Standards that protect user privacy and enable selective disclosure include ISO’s mDL standard and the World Wide Web Consortium (W3C) Verifiable Credentials format.

Similarly, standards must allow secure implementations. That doesn’t just mean that their cryptographic verification processes are sound—that’s important but relatively straightforward to assess. More subtle risks can lurk in how a standard shapes the storage and sharing of data: as an extreme example, fully centralized identity databases present serious risks to users' privacy. 

It’s worth noting here that there’s a nuanced relationship between all these standards and their even more varied implementations – that is, the actual code and systems that use the standards. It’s not hard to take a potentially secure and private identity standard and build a system around it that undermines those virtues, but our multi-standard strategy is focused on the core architecture and making sure our own tools implement it in the best possible way.

We Can Rebuild it. We Have the Standards.

The third minimum requirement for a standard to pass muster at SpruceID is that it offers inherent resilience. Above all, this means that it doesn’t depend on any one technology operator to keep functioning and that even if our own front-end system were to vanish, users would still be able to use and trust the same credentials they had been using with SpruceID. From this perspective, a counterpart to resilience is scalability: how easy it is for a new actor to adopt the standard and provide services using it – including filling gaps that might appear if other ecosystem players were to go away.

If a digital credential system is a network that carries information, you can think of it like a 19th-century railroad. It’s made up of trains and conductors and rails and stations - things you can see and touch. But it’s also made up of standards that underpin all that hardware - technical standards like track width and signaling technology and standardized ways the railroad is scheduled and operated. 

In the old days, railroads competed fiercely, and the utility, depth, and trustworthiness of those standards, including how well they allowed different systems to interact, played a big role in which railroads survived. Railroads with strong standards would be more likely to work well with other systems and make it easier for new operators to rebuild, bail out, or take over. To pick the most obvious example, a railroad that decided its tracks would be twelve inches wide when locomotive manufacturers were churning out dining cars for tracks four feet across would be far less resilient or scalable because of that choice.

Our approach to standards is based on the idea that things can cut in the other direction, as well: If one standard disappears or loses relevance, our systems will still have a second set of rails built to other workable standards. This is a key advantage of implementing multiple standards through one tool.

But the truly big unlock is the peace of mind of not having to worry too much about which standard is “best,” maybe even before you even know what your customers and users will need.

Our priority is answering those specifics and making sure our implementation translates a general format into the best possible user experience. This includes assurances that their data is safely in their control and will be useful for the long haul, regardless of which invisible data formats win the long-term digital identity race.

Want to learn more or discuss your specific use case? Contact us to continue the conversation.

Get in Touch

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Indicio

Senior DevOps Engineer (Remote)

Senior DevOps Engineer (Remote)

 

Job Description

 We are the world’s leading verifiable data technology. We bring complete solutions that fit into an organization’s existing technology stack, delivering secure, trustworthy, verifiable information. Our Indicio Proven® flagship product removes complexity and reduces fraud. With Indicio Proven® you can build seamless processes to deliver best-in-class verifiable data products and services.

As a DevOps Engineer, you will play a crucial role in bridging the gap between development and operations. You will be responsible for designing, implementing, and managing our cloud infrastructure, building pipelines, and deployment strategies. Your expertise in Linux system administration, containerization, and cloud platforms will be vital in maintaining efficient and scalable development environments.

As a rapidly growing startup we need team members who can work in a fast-paced environment, produce high quality work on time, work without supervision, show initiative, innovate, be laser focused on results, and have outstanding communication skills. Indicio is a fully remote global team (our Maryland colleagues have a co-working space) and our clients are located around the world. You will create lasting impact and see the results of your work immediately. 

Responsibilities

Infrastructure Management: Design, deploy, and manage cloud infrastructure on AWS and GCP. Provision cloud resources and ensure the scalability, reliability, and performance of our systems.

Build and Deployment Pipelines: Develop and manage build pipelines using tools like Jenkins, GitHub Actions, GitLab CI/CD, or similar. Ensure automated and reliable software delivery processes.

Autoscaling and Monitoring: Implement autoscaling solutions to handle varying workloads. Set up and manage logging and monitoring infrastructure to ensure system health and performance.

Development Support: Collaborate with development teams to manage and optimize development environments. Assist in debugging by gathering and analyzing data from various sources. Participate in incident management and resolution.

Documentation and Best Practices: Create and maintain documentation for infrastructure and deployment processes. Advocate for and implement best practices in DevOps and continuous integration/continuous deployment (CI/CD).

Qualifications

Linux System Administration: Strong experience in Linux system administration, including configuration, troubleshooting, and performance tuning (required).

Containerization: Proficiency with Docker and container orchestration platforms (required).

Build Pipelines: Experience with CI/CD tools and building automated pipelines (required).

Version Control: Proficiency with Git for version control (required).

Cloud Platforms: Hands-on experience with AWS and/or GCP, including provisioning and managing cloud resources (required).

Autoscaling: Knowledge of autoscaling mechanisms and strategies (required).

Logging and Monitoring: Experience with logging and monitoring tools (e.g., ELK stack, Prometheus, Grafana) (required).

Apply today!

 

The post Senior DevOps Engineer (Remote) appeared first on Indicio.


KuppingerCole

From Directive to Action: The Value of Draft Documents in Navigating the NIS2 Compliance Challenge


by Matthias Reinwarth

Organizations across Europe are in the midst of a challenging process—implementing the requirements of the NIS2 Directive. This EU-wide cybersecurity legislation, which took effect on January 16, 2023, demands significant and broad-ranging compliance efforts. However, a key obstacle remains: the EU, or more specifically the member states who need to translate this into national legislation, have yet to provide detailed guidance on what organizations must do to comply. This ambiguity leaves much to interpretation, creating a fertile ground for third-party recommendations and, perhaps, confusion.

The Countdown to Compliance

NIS2 affects a wide range of companies, far more than its predecessor, the original NIS Directive. The clock is ticking, with the October 18, 2024, deadline for implementation drawing near. The problem? NIS2 requires that all member states integrate its provisions into their national cybersecurity laws, a process that is still incomplete in several countries. While some, like Germany, are nearing the final stages of this legislative adoption, the lack of detailed guidance has led many organizations to mainly rely on established control frameworks to meet the directive's broad requirements. And the NIS2 policy makes several references to the ISO27000 family of documents, for example, as a source of best practice.

Many EU member states are still not fully prepared, and the directive’s broad, somewhat generic provisions—such as those in Article 21—leave organizations guessing what specific measures to take. Companies must adopt an all-hazards approach, addressing everything from risk analysis and incident handling to business continuity and supply chain security. Yet, the details of what this entails are sparse.

One exception to this lack of specificity is the requirement for multi-factor authentication (MFA), which NIS2 explicitly mentions. Beyond that, however, companies are left to navigate a landscape of general directives, hoping that their interpretations will suffice.

A Glimmer of Guidance

Amid this uncertainty, a notable development has quietly emerged. The European Commission recently published a draft Implementing Regulation (IR) that could bring much-needed clarity - though only for a narrow subset of entities within the digital infrastructure sector, such as cloud computing providers, DNS service providers, and online marketplaces. The draft includes an Annex with detailed controls, providing a level of specificity that many have been craving.

For example, in the realm of Identity and Access Management (IAM), where Article 21 (2) of NIS2 vaguely calls for “access control policies,” the Annex goes much further. It dedicates three full pages to detailed, actionable requirements. These include the need to establish and implement logical and physical access control policies for network and information systems, addressing access by people and processes, and ensuring access is granted only after proper authentication. The document demands the regular review and update of these policies, and the management of access rights based on principles like least privilege and separation of duties. It even specifies requirements for privileged accounts, system administration systems, and the life cycle management of identities, including secure authentication procedures.

And there is much more, as this was just a single example.

Article 3 of the main draft document defines basic criteria for identifying "significant incidents", a definition many have been looking for (although even these criteria could be clearer).

Want more? Chapter 3 of the Annex provides another three pages of incident management controls from establishing a comprehensive incident handling policy (including clear roles, responsibilities, and procedures for detecting, analyzing, responding to, and reporting incidents) to post-incident reviews.

This level of detail, while only applicable to the given list of specific sectors when approved, provides a solid foundation for organizations preparing for NIS2 compliance. It offers a glimpse into what might become the standard for other industries and states as well.

The Broader Implications

If approved and finalized, this draft regulation will only apply to certain sectors, but it's easy to see how it could serve as a blueprint for broader national legislation. The specificity it offers contrasts sharply with the general nature of NIS2 itself, making it a valuable resource for organizations seeking to align with the directive’s requirements. Indeed, as more countries finalize their national implementations of NIS2, it is likely that they will look to this draft IR as a model for their own regulatory frameworks.

However, it’s again important to note that this regulation is still in draft form. And it will only directly affect multinational organizations in the digital infrastructure sector that would otherwise fall through the regulatory cracks. But even in its current state, not yet applicable and with limited scope, the draft IR makes sense. It’s a step toward the clarity and guidance that practitioners - especially those on the front lines of cybersecurity - desperately need.

A Practitioner’s Take

As someone who is not a lawyer but looks at these regulations from both an analyst's and a practitioner's perspective, I see the value in any document that offers a reasonable level of detail. For organizations struggling to prepare for NIS2, the draft IR’s specificity provides a welcome roadmap. It’s likely that we’ll see elements of this document adopted more broadly, shaping the way national legislations and implementation procedures evolve.

In the meantime, organizations might want to keep a close eye on developments around this draft IR. While it is, yes, still a draft and its future applicability will be limited, the clarity it offers could soon extend to a much wider audience, helping to dispel some of the uncertainty surrounding NIS2 compliance.


Elliptic

As the US election nears, AI political deepfake scams are targeting crypto users


Crypto has taken a prominent stage in the US election campaign, with Donald Trump and Robert F. Kennedy Jr. attending Bitcoin 2024 in Nashville and Kamala Harris reportedly set to soften her stance on blockchain technologies.

This increasing interest in the benefits of crypto, and how it can be safe and accessible to everyone, is welcome. As with any major event or new technology – be it elections, pandemics, conflict or AI – a small minority of illicit actors will nevertheless seek to capitalize on these developments to defraud innocent victims out of their funds.

Amid an increase in election-related scam activity, Elliptic advises both new and experienced crypto users to be vigilant of suspicious deepfake videos and investment opportunities – and to familiarize themselves with their red-flag indicators.

Latest identified scams indicate that fraudsters are exploiting Trump’s Bitcoin 2024 speech, his nomination of crypto-friendly running mate JD Vance and Elon Musk’s recent endorsement as a means of luring interested individuals into “get rich quick” schemes. Scammers are using AI-generated deepfakes to manipulate speeches of individuals such as Trump and Musk to depict them as promoting fake crypto investment sites.

As the Democrats launch a “Crypto for Harris” initiative, it is possible that the Vice President’s likeness may also be exploited by deepfake scammers throughout her campaign.

Elliptic has recently published a report into AI-enabled crime in the cryptocurrency ecosystem. Download your copy here.


Ocean Protocol

Introducing Ocean Nodes: Decentralized practical solution for building powerful AI

This blogpost will detail how Ocean Nodes work, the benefits they bring to the community, our launch roadmap, incentives for participation, and opportunities for early adopters.

We’re thrilled to share some exciting news from Ocean Protocol. After months of development and community feedback, we are launching Ocean Nodes — a powerful, decentralized solution to streamline and enhance your AI model development. Ocean Nodes is designed to simplify the process of leveraging Ocean Protocol’s capabilities, making it easier than ever to build, deploy, and monetize AI models.

AI technology is everywhere. Machine learning models have been integrated into our lives for some time now — it’s in our smartphones (camera image processing, keyboard autocomplete, etc.), search engines, transportation apps, and more. While these technologies have been seamlessly integrated into our lives, using them to simplify our day-to-day tasks, recent advancements like OpenAI’s generative pre-trained transformer (GPT) have brought ML into the spotlight. Now, other major players are investing heavily in these solutions.

So we are taking it one step further, with an innovative solution to democratize these large models, decentralize them, and help monetize and protect IP: Ocean Nodes.

This launch is a significant milestone for us, aligning perfectly with our broader goals for 2024. As highlighted in our roadmap, specifically the goal to launch C2D.2, Ocean Nodes are the foundation for achieving scalable Compute-to-Data (C2D) technology.


The Ocean Nodes

Ocean Nodes have been designed to provide the developer community with a smoother experience, whilst introducing a way to monetize independent computational resources through its infrastructure. It allows individuals to run all components of the Ocean Protocol stack such as Ocean Provider, Aquarius and Compute-to-Data, in a single component — effectively simplifying the process of using Ocean to manage data sharing on decentralized rails. On top of this, any GPU provider will soon be able to share & monetize their computational resources directly via Ocean Protocol — ultimately lowering the barrier to entry for everyone to participate in the new data economy.

The launch of the Ocean Nodes marks a significant step for developers, data scientists, and large organizations interested in running computation directly on-chain while demanding the reliability and scalability of traditional Web2 infrastructure — all whilst keeping privacy as a first-class citizen.

We created this component with a modular approach in mind. You can install it and run it just as a provider, with minimal resources needed, or you can run the full node with indexer and C2D.

Benefits for the Ocean Nodes User

Simplified Workflow: Run all essential components of the Ocean Protocol stack, such as Ocean Provider, Aquarius, and Compute-to-Data, in a single setup

Monetization Opportunities: Easily monetize independent computational resources

Enhanced Privacy: Prioritize privacy with advanced encryption features

Scalability: Benefit from a scalable solution that integrates seamlessly with existing infrastructure

Ease of Use: Designed to be user-friendly with a modular approach for flexibility

Early Adopter Rewards: Exclusive incentives and rewards for early participants

Launch roadmap

We value the power of community feedback, so instead of waiting for the final version, we are implementing a three-stage release to gather your valuable insights and improve the product as we go.

Phase 1 — A new beginning

The first stage is all about creating a vertically and horizontally scalable component that integrates all previous off-chain elements, making it easy for developers to run the Ocean infrastructure.

Phase 2 — Decentralized Encryption

Expected in Q3 2024, this phase will introduce advanced privacy features through an integration with Oasis Protocol. In this release the Ocean Provider will be run using the Oasis Sapphire SDK for asset encryption and decryption, enhancing node security and trust.

Previously, encryption relied on a private key on the desired Ocean Provider, which (1) required trusting the provider and (2) if that provider went down, the user would have had to republish the asset with a new provider. With the new system, encryption is handled by Oasis Sapphire, eliminating dependency on specific nodes.

We have mitigated trust issues by making nodes easy to install with minimal resources and by introducing a new way of setting trusted nodes using NFTs. Anyone can create a trust list, and only nodes on that list can decrypt and serve the asset, ensuring enhanced security and reliability.

Phase 3 — Scalable Compute-to-Data Technology

The final release of Ocean Nodes will include a full refactoring of the Compute-to-Data technology, which will enable computation on data sources hosted on traditional data infrastructures.

This enhancement allows the Ocean Protocol infrastructure to scale horizontally beyond Web3, empowering traditional Web2 organizations to save costs, maintain their existing infrastructure whilst benefiting from the permissionless, verifiability, immutability, and privacy features offered by Ocean.

Compute-to-Data 2.0 (C2D.2) will be an additional component provided by Ocean Nodes, and will also allow everyone to monetize their idle computing power by offering CPU/GPU resources.

Incentives — starting August 29th, 2024

We will have multiple incentive mechanisms during the lifetime of the nodes to help grow a sustainable ecosystem. These incentive mechanisms will change and evolve over time depending on the ecosystem's needs.

The first will be based on uptime.

The Ocean Protocol Foundation will allocate 5,000 $FET each week to Ocean Nodes that demonstrate a high level of uptime. Rewards will be calculated weekly using the formula:

R0 = Xt * U0 / Ut

Where:

R0 = Total rewards earned
Xt = Total rewards available
U0 = Node uptime in seconds
Ut = Total uptime per week, in seconds
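In Python, a single node's weekly payout under this formula can be sketched as follows (variable names mirror the formula; this is an illustration only — the OPF Benchmark Backend's exact accounting may differ):

```python
def weekly_reward(total_rewards: float, node_uptime_s: float, total_uptime_s: float) -> float:
    """R0 = Xt * U0 / Ut: a node's share of the weekly pool,
    proportional to its uptime relative to all nodes combined."""
    if total_uptime_s <= 0:
        return 0.0
    return total_rewards * node_uptime_s / total_uptime_s

# Example: a node online the full week (604,800 s) when total network
# uptime is 302,400,000 s (e.g. 1,000 nodes averaging half the week):
pool = 5_000  # weekly $FET allocation
print(weekly_reward(pool, 604_800, 302_400_000))  # → 10.0
```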

To prevent network exploitation by malicious actors, the OPF Benchmark Backend will regularly monitor and record node uptime on the dedicated dashboard. More details about the incentives in the dedicated blog post.

Early adopters

As Ocean Protocol remains committed to rewarding early adopters, we invite you to join us in the first month of each phase launch to receive exclusive Soulbound Tokens (SBTs) for nodes with the highest uptime.
A maximum of 50 nodes will be awarded one SBT per stage. These tokens offer a reward multiplier and will be awarded to Ocean Nodes that remain active throughout each launch stage.

Owning all three SBTs will grant a maximum reward multiplier of 2x. Owning fewer than three SBTs will follow the structure below:

Phase 1 Launch SBT: 1.5x Reward Multiplier
Phase 2 Launch SBT: 1.3x Reward Multiplier
Phase 3 Launch SBT: 1.2x Reward Multiplier
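As a sketch, the multiplier lookup might work like this. Note one assumption: the post does not say how multipliers combine when a node owns exactly two SBTs, so this example applies the highest single multiplier owned unless all three are held (2x).

```python
# Per-phase multipliers from the announcement. How exactly two SBTs
# combine is unspecified; we assume the highest single multiplier applies.
SBT_MULTIPLIERS = {1: 1.5, 2: 1.3, 3: 1.2}

def reward_multiplier(owned_phases: set) -> float:
    """Return the assumed reward multiplier for a node's owned SBT phases."""
    if owned_phases >= {1, 2, 3}:
        return 2.0  # all three SBTs grant the maximum 2x multiplier
    if not owned_phases:
        return 1.0  # no SBTs, no boost
    return max(SBT_MULTIPLIERS[p] for p in owned_phases)

print(reward_multiplier({1, 2, 3}))  # → 2.0
print(reward_multiplier({2}))        # → 1.3
```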

Our team has designed Ocean Nodes to be both user-friendly and powerful, addressing the needs of developers, data scientists, and organizations. With Ocean Nodes, you can now run all essential components of the Ocean Protocol stack in a single, streamlined setup. This not only simplifies your workflow but also opens up new opportunities for monetizing computational resources.

By getting started with Ocean Nodes now, you’ll be well-prepared to leverage the full potential of C2D as it evolves, ensuring you stay ahead in the rapidly changing landscape of AI and data sharing.

To start running your node today access the Ocean Nodes README, and follow the Quickstart guide available in the main repository for detailed instructions on deployment.

For more updates and detailed guides on setting up and running your nodes follow us on our dedicated X profile, and join the discussion in our Discord Server.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Ocean Protocol is a founding member of the ASI Alliance.

Follow Ocean on Twitter or Telegram to keep up to date, and Predictoor’s Twitter for its news. Chat directly with the Ocean community on Discord. Track Ocean’s tech progress directly on GitHub.

Introducing Ocean Nodes: Decentralized practical solution for building powerful AI was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Apple Opens NFC Chip

Implications for Decentralized Identity and Contactless Technology

Apple has announced a significant change in its approach to Near Field Communication (NFC) technology on iPhones. Starting with iOS 18.1, Apple will open up the iPhone’s NFC chip and Secure Element to third-party developers, allowing for contactless transactions outside of Apple Pay and Apple Wallet.

This move has far-reaching implications for the future of digital identity and contactless technology.

Decentralized Identity and NFC

The opening of Apple’s NFC chip aligns closely with the principles of decentralized identity, a framework that gives individuals control over their personal data and identity verification. With this new development, developers can create applications that leverage NFC technology for various identity-related purposes, including:

Digital IDs and passports
Corporate badges and student IDs
Hotel room keys and home access
Loyalty programs and event tickets

This shift towards decentralized identity solutions using NFC technology could revolutionize how we manage and verify our digital identities in both online and offline environments.

Impact on Contactless Payments

The opening of Apple’s NFC chip will create new opportunities for contactless payments. Banks and other financial services can now develop their own NFC-based payment solutions, potentially increasing competition in the mobile payments space. This could lead to more innovative payment options for consumers and businesses alike.

Implications for Developers and Businesses

Developers will need to enter into commercial agreements with Apple and pay associated fees to access the NFC and Secure Element APIs. This new capability will be available in several countries, including Australia, Brazil, Canada, Japan, New Zealand, the UK, and the US.

For businesses, this change opens up new possibilities for customer interaction and service delivery. From seamless check-ins at hotels to enhanced loyalty programs, the potential applications are vast.

The Future of Digital Identity

As we move towards a more digitally integrated world, the combination of NFC technology and decentralized identity principles could pave the way for more secure, user-controlled digital identities. This aligns with initiatives like the EU Digital Identity Wallet, signaling a broader shift in how we manage and verify identities in the digital age.

In conclusion, Apple’s decision to open up its NFC chip represents a significant step towards a more open and interoperable ecosystem for digital identity and contactless technology. As this technology evolves, we can expect to see innovative applications that enhance security, privacy, and user convenience across various sectors.

Apple Opens NFC Chip was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF102 Completes and DF103 Launches

Predictoor DF102 rewards available. DF103 runs Aug 15 — Aug 22, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 102 (DF102) has completed.

DF103 is live today, Aug 15. It concludes on August 22. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF103 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.

To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF103

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF102 Completes and DF103 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Approaches to keep sending OTP over SMS... for now


Table of Contents

Approaches to keep sending OTP over SMS… for now
SMS/Voice is too SIMple
Hooked on telephony
Which regions?
How many messages?
How reliable?
From you or Okta?
How secure?
How many people?
Designing a DIY Hook
Handling failover to Okta
Vendors
Telephony providers
Consultants
Services
What Next?

Approaches to keep sending OTP over SMS… for now

“SMS has long played an important role as a universally applicable method of verifying a user’s identity via one-time passcodes. And over the last decade, SMS and voice-based Multifactor Authentication has prevented untold attempts to compromise user accounts.

But it’s time to move on.”

– Ben King, VP Customer Trust: BYO Telephony and the future of SMS at Okta

SMS/Voice is too SIMple

The one-time passcode (OTP) you send using SMS or Voice may not go to the phone you want. SIM swapping–stealing someone else’s phone number–lets bad actors receive the message or call with the code. They’re one step closer to breaking into your system. And if all it takes is an account name and OTP, they may succeed. And it’s not just SIM hacking; other issues include:

No phishing resistance

No control of the channel for sending secrets

No way to link a user to their device

Longer login times than other methods

Okta recommended moving away from SMS/Voice authentication some time ago. There are many other factors you can use for authentication, including:

Generating codes in an authenticator app such as Okta Verify, Authy, Google Authenticator, or 1Password.

FIDO2.0 (WebAuthn) which, in addition to phones, can use hardware keys and on-device authenticators.

Soon, Okta will require you to bring your own telephony provider to keep sending those codes. If you need time to move to a different method of verifying identity, you must configure your own provider for SMS/Voice.

Hooked on telephony

You can send the OTP in the SMS/Voice flow using the telephony inline hook. Okta uses the code or URL in the hook to send the OTP, though, as you’ll see, the hook may not be called every time (and that’s a good thing). When your hook fails to send the message or takes too long to update the status, Okta takes over sending the message. However, the number of those messages is heavily rate-limited.

The code or URL you provide may simply send the message and communicate the outcome to Okta. The code or server may be more complex, managing geo-specific vendors, failure, failover to another provider, and hacking. No matter how easy or complex the code, there are three main approaches:

Implement the code and use your own telephony provider or providers.

Outsource the implementation and use your own telephony provider or providers.

Use a managed service that manages the process for you.

Some of the main things to consider when choosing an approach are the regions for messages, the expected traffic, the desired reliability, branding requirements, protection from hacking, and your resources.

Which regions?

Two things can identify a region. First are any regulations for sending messages. Those regulations can be set by collectives, such as the European Union, countries, or even sub-parts of a country. Second is the area covered by the telco sending the message.

Sending messages to more than one region may have at least two impacts. First, check that your desired vendor or vendors cover those regions.

Second, the features and regulations for traffic may differ from region to region. Some of the differences include:

Limitations on the types of entities that can send messages by SMS. This typically requires proof of identity and business registration.

Registration of a sender ID for your business. For example, messages without a valid sender ID are automatically marked as “Likely-SCAM” in Singapore.

Using short codes–special telephone numbers designed for high traffic. This can add significant cost.

Supported formats, such as ASCII and Unicode.

Character length limits for messages. Note that each Unicode item counts as two characters.

Check that your vendor supports the regulations in your desired regions.
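Those character limits matter when templating OTP messages. As a rough sketch of how segment counts can be estimated (an approximation only: it treats any non-ASCII text as forcing UCS-2 encoding and ignores GSM-7 extension characters, which real gateways handle with more nuance):

```python
def sms_segments(text: str) -> int:
    """Rough SMS segment estimate: GSM-7 messages fit 160 characters per
    single segment (153 per segment when multipart); Unicode (UCS-2)
    messages fit 70 (67 when multipart). Approximation: any non-ASCII
    character is assumed to force UCS-2."""
    if text.isascii():
        single, multi = 160, 153
    else:
        single, multi = 70, 67
    n = len(text)
    if n <= single:
        return 1
    return -(-n // multi)  # ceiling division
```

For example, a 161-character ASCII message spills into two segments, while a 71-character message containing "€" does the same because of the tighter UCS-2 budget.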

How many messages?

Telephony vendors or service providers need to know the volume of messages. And not just the average volume, but any peaks, such as a time when a majority of people are trying to sign on to your network.

Service cost is the most obvious volume-related issue, but there are two others. First is the impact on the rate limits used to prevent spam texts. These limits can prevent messages from being sent, especially during peak volume; vendors may be able to increase limits.

The second impact, the reputation score, also limits the volume of messages. The lower the reputation score, the fewer messages you can send. The goal is to prevent bad actors from sending lots of spam. Newer and smaller companies start with a lower score. The score increases over time as you send messages without hitting rate limits.

Some telephony vendors or service providers can work around this limit. For example, a service provider may use its own reputation or send messages from a pool of phone numbers.

How reliable?

Delivering the OTP to a phone requires several steps, and any of them can fail. The more steps, the more code between the OTP and the requestor, and the more chances of failure.

Most telephony and other service providers provide a service level agreement (SLA). Availability (or uptime) is the most common measurement: the percentage of time a service can receive your request and send the message. But there are other things to consider: delivery time, knowing if it’s delivered or not, and round-trip time (total time from request to notification of outcome).

That last number is important as there’s a time limit of three seconds from Okta calling the hook to receiving a success (or failure) result. After that, the default is that Okta sends the message using its providers. However, those sends are heavily rate-limited.
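One way to stay inside that three-second window is to give the vendor call a shorter internal budget and report a failure yourself rather than letting Okta time out. A minimal sketch (the vendor call and the two-second budget are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

VENDOR_BUDGET_S = 2.0  # leave headroom inside Okta's 3-second limit

def send_with_budget(send_fn, phone: str, otp: str) -> str:
    """Run the vendor call in a worker thread; report FAILED if it does
    not finish inside the budget, instead of letting Okta time out."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(send_fn, phone, otp)
        future.result(timeout=VENDOR_BUDGET_S)
        return "SUCCESSFUL"
    except FutureTimeout:
        return "FAILED"  # vendor too slow; let your own failover logic run
    except Exception:
        return "FAILED"  # vendor raised an error
    finally:
        pool.shutdown(wait=False)
```

Reporting FAILED promptly lets you decide what happens next (retry, second vendor) rather than falling into Okta's heavily rate-limited failover by timeout.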

From you or Okta?

Implementing the code for the hook yourself or using a consultant gives you the most control over message content. Services may offer partial or complete content customization.

You can customize the SMS messages sent by the Okta failover mechanism, though not the voice calls.

How secure?

Okta still rate-limits calls to the telephony hook to prevent spam or toll fraud. But that’s not the only security issue.

Whether you implement the hook yourself or use a service, the endpoints and calls must be protected from attacks. That includes protecting any API keys and preventing unauthorized access and use.

There are also accounts with the provider or service that must be secured.

How many people?

No matter the other concerns you identified, processes will change and update, and new things will need to be done.

New message flows and failovers require updating existing support processes for SMS/Voice users. This may include working with your chosen telephony or service vendor. You may also need to add more frequent log monitoring to detect when the failover rate limit prevents Okta from sending messages.

Vendors need management. Projects for implementing the chosen approach need planning and project management. The resources for the implementation phase vary significantly.

Implementing custom code is similar to adding a somewhat complex feature to your product: it requires product management/specification, design, engineering, testing, and project management. Outsourcing the implementation can reduce the technical resources but adds vendor management.

Moving to a service provider minimizes the technical requirements, though there’s still vendor management and monitoring.

Designing a DIY Hook

The first step in implementing a telephony hook is finding a vendor. There are at least three essential criteria:

Send messages to the desired regions

Meet reliability requirements, especially when handling failover

Allow the desired volume of messages

That last point is because some vendors limit the volume for smaller or unknown companies.

The server you write for the telephony hook uses the information received from Okta to construct a message request to your vendor. The status of the message also needs to be communicated back to Okta. Sometimes, this requires translating the data from the telephony provider into the JSON format expected by Okta.
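To make that translation concrete, here's a minimal sketch of building the success response. The commands/value structure mirrors the response format documented for Okta telephony inline hooks, but treat the exact field names as something to verify against the current Okta reference rather than as authoritative.

```python
import json

def okta_success_response(provider: str, transaction_id: str) -> str:
    """Build the JSON body returned to Okta after a successful vendor send.

    `provider` names the telephony vendor that delivered the message and
    `transaction_id` is the vendor's identifier for the send, so the
    result can be traced in both systems' logs.
    """
    body = {
        "commands": [
            {
                "type": "com.okta.telephony.action",
                "value": [
                    {
                        "status": "SUCCESSFUL",
                        "provider": provider,
                        "transactionId": transaction_id,
                    }
                ],
            }
        ]
    }
    return json.dumps(body)
```

Remember the three-second budget: the vendor call, this translation, and the HTTP response all have to complete before Okta gives up and fails over.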

Handling failover to Okta

Another case you must handle is a failover to Okta. Failover happens when something goes wrong with your telephony hook. Okta takes over sending the message, but the number of messages is heavily rate-limited. The only way to determine if the message was sent is by searching the logs to see when sends started failing. Your messages may never arrive.

There are two triggers for failover: your telephony hook returns a “failed” status to Okta, or a three-second timeout passes.

You can prevent failover by always returning a successful result or requesting Okta to disable failover for your organization. However, doing so means that you must handle failed message sends. That requires more complex server code and possibly multiple vendors.
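A rough sketch of what "handling failed sends yourself" can look like: try each configured vendor in turn and report failure only after all of them are exhausted. The vendor functions here are placeholders, not real provider SDK calls.

```python
def try_vendors(phone: str, message: str, vendors) -> bool:
    """Attempt delivery through each vendor callable in order.

    Each vendor is a placeholder function taking (phone, message) and
    returning True on success; a real implementation would wrap an SMS
    provider's API client. Falls through to the next vendor on failure
    or exception, and reports failure only once all are exhausted.
    """
    for send in vendors:
        try:
            if send(phone, message):
                return True
        except Exception:
            continue  # vendor outage or API error: try the next one
    return False
```

If all vendors fail and you've disabled Okta's failover, you still need to alert on the failure and surface it to support, since the user is now waiting on a code that will never arrive.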

Vendors

The kind of vendors you need depends on your approach. Below are a few possibilities. Some are recommendations from Okta, and others are suggestions. No matter what, make sure that the vendor meets your criteria.

Telephony providers

Here are some vendors you can use to implement the hook in-house or with a consultant.

Telesign

Twilio

Vonage

Consultants

Many consulting companies can implement the hook for you. Another option is to use Okta professional services.

Services

Some services deliver the SMS for you. That can include handling unavailable telephony vendors, resends, and other issues. Adding a service usually requires only adding a URL for the telephony hook.

Services include:

Amazon Pinpoint

BeyondID

Twilio Verify

What Next?

If you rely on SMS for authentication, start thinking about how to replace it. In the meantime, use what you’ve learned in this post to keep your solutions as secure as possible.

For more content like this, follow Okta Developer on Twitter and subscribe to our YouTube channel. If you have any questions about migrating away from SMS, please comment below!


PingTalk

What is Segregation of Duties?

Read this blog to understand what Segregation of Duties is and why it’s a critical piece of identity security for today’s enterprises.

In today’s fast-paced, technology-driven business landscape, maintaining robust security and compliance protocols is paramount. One critical concept that organizations adopt to safeguard their operations is Segregation of Duties (SoD). Read on to understand what Segregation of Duties is, why Segregation of Duties is an important policy in today’s world, and how modern tools and technologies can facilitate its effective implementation.

Wednesday, 14. August 2024

HYPR

Going Passwordless: 6 Tips to Navigate Passkey Adoption

By now, most of us realize that passkeys and passwordless authentication beat passwords in nearly every way — they’re more secure, resist phishing and theft, and eliminate the need to remember and type in an ever-growing string of characters. Despite this, most organizations still rely on password-based authentication methods.

Transitioning to passwordless authentication offers a far more secure and user-friendly experience, but making the switch can seem daunting. In fact, the most recent Passwordless Identity Assurance survey found that nearly one third (31%) of organizations name implementing passkeys as a primary identity security challenge.

Technical integration is only one aspect. For many organizations, rolling it out to users and getting them to use it can be the thornier part.

Understanding User Adoption

User resistance to an unfamiliar technology can be a hurdle in transitioning to passwordless. It’s critical to take a phased, change management approach, including pilot programs and early adopter groups. Clear communication about the benefits of passkey systems, referencing successful case studies and industry best practices, helps allay user skepticism and increase acceptance.

User-centric design and understanding the psychology of habit formation are essential to achieve widespread adoption. User experience greatly impacts a passwordless initiative — balancing security and convenience is key. Consider and address your varying use cases, potential accessibility issues, and technical challenges, such as legacy systems.  

As Director of Customer Excellence at HYPR, I’ve worked with many customers during their passwordless transition. As someone with even more years in IAM and customer experience in general, I’ve seen and heard many tech rollout tales. Here are some of the top tips to help your organization navigate passkey adoption effectively.

Six Best Practices for Passkey Adoption

1. Map Out Use Cases

Different user groups within an organization may have varying needs, both in their job function and as individuals. When it comes to passkey adoption, one size doesn’t fit all. Multiple passwordless options may be required.

Begin by getting a full picture of your current login methods. Identify priority login systems and stakeholders. Do you have remote or hybrid employees? What IdPs, devices, browsers, and operating systems are being used? Consider non-employees like contractors, business partners, or volunteers — when and how do they log in? For users who travel extensively, note any special authentication requirements.

PRO TIP: Identify any applications, systems, or tools with additional authentication controls due to sensitive data. Evaluate specific user accounts, like IT administrators, with higher security needs.

2. Identify and Plan for Legacy Systems and Other Challenges

Passwordless deployments can be hindered by legacy systems, technical and usability concerns, and a lack of preparedness for the reliance on secondary devices. Look at the legacy applications in your tech stack, how they are used, and their current authentication methods. Will your passwordless solution integrate with them? If system updates or configurations are required for compatibility, make sure to get leadership buy-in during the planning stage.

Addressing technical and usability concerns requires capturing all unique workstream requirements and considering business-specific constraints. For example, customer-facing roles may have different constraints than back-office roles or a manufacturing floor. Users that travel may require offline authentication options.

PRO TIP: The reliance on secondary devices makes it critical to be ready with secure recovery and backup options.

3. Strategic Planning for Rollout

Thorough planning and secure process design are critical for a successful rollout. Authentication is a critical path product — ensure you’re prepared. Establish timelines, set rollout stages, and develop communication plans. Conduct a pilot test with a small group to help identify and address potential issues before rollout.

Take the rollout in stages too, beginning with a first adopter group. This approach allows for fine-tuning the system and ensuring a smoother transition for the entire organization. Ideally, the group will include both technically-minded people as well as those less comfortable with technology. Your early adopters should represent a cross-section of use cases, especially privileged users or other groups with specific security requirements. The sequence and timing of rollout will depend on your unique environment and business, but make sure senior leadership is part of the earliest stages — a top-down approach significantly helps end-user buy-in and speeds passkey adoption.

Communication during all stages is critical to both educate and preempt objections. Concerns about biometric data usage, for example, can be mitigated through educational campaigns that clarify how such data is stored and protected.

PRO TIP: Consider aligning your password policy with your improved security strategy by enforcing complex passwords in line with guidance from CISA and PCI DSS 4.0 requirements. Your users will look forward to the ease of passwordless authentication.

4. Clear Communication and Guidance

Effective communication and guidance are essential for facilitating passkey adoption. Clear, concise, and user-friendly documentation can help users understand and adapt to new authentication methods. Early adopters can provide invaluable feedback to improve documentation and identify fringe use cases and outlier scenarios.

User adoption relies on awareness of the improvements passwordless authentication offers over traditional methods. The FIDO Alliance provides some helpful communication recommendations in their Design Guidelines.

Explain that you are replacing passwords with stronger, phishing-resistant authentication. Don’t get hung up on terminology – use what works best for your users. For example, one of our customers used the term “non-shareable credentials” instead of passwordless authentication or passkeys as that resonated better with their workforce.

Provide training on new login flows, highlighting speed, ease-of-use and security. Use multiple touchpoints, such as town halls, training videos, and cheat sheets. Include guidelines for troubleshooting issues like lost devices and keep stakeholders updated throughout the transition. Importantly, solicit user feedback and be prepared to adjust communication materials if needed.

Example user communication courtesy of the FIDO Alliance

5. User Onboarding and Support

Plan for supporting your users when the new system goes live. Make sure you take into account the needs of users in different time zones or those who travel frequently. Train your help desk to educate as well as troubleshoot. Ensure that support resources are readily available to address any issues that arise. Monitor KPIs like login times, call volume and ticket metrics pre vs post-implementation.

PRO TIP: Create a promotion or contest with prizes. Offer gift incentives, swag, or giveaways to the first enrollees, or to everyone who enrolls.

6. Choose the Right Passwordless Solution

All of the previous steps depend on selecting the right passwordless provider for your environment, user population, and use cases. The optimal solution removes adoption obstacles, balancing hardened security with maximized convenience and quick deployment. If you’re reading this, you’ve likely decided that a solution based on FIDO Certified passkeys is the best approach, but there is a wide range of options within this category. Assess vendor offerings based on cryptography standards, biometric and device support, scalability, customer success rate, and implementation timeframe. Ease of integration with existing web/IT infrastructure is critical.

💡12 Considerations for Assessing a Passkey Solution — Download the Guide

PRO TIP: Don’t forget that secondary authentication processes and situations — registration, re-registration, lost and stolen devices — must also be protected. Look at your provider’s entire set of identity security capabilities — do they provide identity proofing technologies and other critical identity security controls?

HYPR Is Your Passkey Adoption Partner

Companies need an identity security partner with expertise in change management and a solution that provides flexibility along with the controls enterprises require. HYPR has been helping companies implement passkeys and passwordless authentication for more than a decade. This includes a top U.S. bank with the largest workforce FIDO implementation in the world.

HYPR’s leading passwordless MFA solution, HYPR Authenticate, eliminates shared credentials while providing a friction-free user experience. It offers a range of authenticator options, including our award-winning passwordless app, and works everywhere, whether in-office or remote, online or off.  

HYPR Authenticate is the foundation of our Identity Assurance Platform, which combines phishing-resistant authentication, adaptive risk mitigation, and automated identity proofing and verification to secure the entire identity lifecycle. HYPR integrates with your current systems, IdPs, SSOs, and applications to unify authentication across the business.

To find out how HYPR can help your organization go passwordless, painlessly, get in touch with our team.


Anonym

Here’s How Credit Unions and Banks Can Save 20,000 Staff Minutes a Month

Credit unions and banks can save a massive 20,000 minutes a month – which translates to about 4–5 staff members’ time – by implementing a single data privacy solution. 

That’s the startling message our Anonyome Labs sales team had for popular credit union talk show host, Mike Lawson, at the credit union advocacy conference, GAC 2024, in Washington D.C. earlier this year. 

Mike’s CU Broadcast welcomed us on to discuss Anonyome Labs’ revolutionary identity verification solution, reusable credentials, and why it’s now so important that financial institutions embrace this highly innovative technology to keep member data safe and savvy consumers happy. 

Listen to the CU Broadcast episode 

We had six a-ha moments for Mike during the CU Broadcast episode:

1. There’s a gap in the Know Your Customer (KYC) process that we can fix right now. Current onboarding and KYC requirements demand loads of personal information from new customers, which takes a long time to process and is at risk of data breach. What’s more, many credit unions are still manually processing onboarding data, which causes friction, turns off time-poor and tech-savvy consumers, and is open to fraud. Some key problems here are that 80% of us don’t know where our local branch is anymore; every transaction with a credit union requires the member to hand over different pieces of their personal information (e.g., mother’s maiden name, driver’s license, etc.); data breaches are rampant; and consumers are increasingly questioning why companies need so much of their personal information to access services.

2. Anonyome Labs’ market-leading reusable credentials can solve these problems effortlessly. The new technology replaces disparate, traditional processes with a single cryptographically protected digital ID that is persistent, irrefutable, and customer-controlled. With a reusable credential, the customer only has to verify themself once, and the same credential ecosystem creates proof of identity for any of their interactions with the credit union. Reusable credentials leverage groundbreaking decentralized identity and blockchain technology to secure the information and, conveniently, the customer stores their credential on their mobile device. Nothing could be simpler.

3. Reusable credentials save financial institutions about 20,000 minutes a month in staff time. This is a big one! Instead of all that double handling of data and slow manual processes, a reusable credential streamlines the member’s experience in today’s fast-paced and data-driven environment, potentially shaving a minute-and-a-half off each member’s verification time and saving around 20,000 staff minutes a month. The time savings add up to the equivalent of about four or five, even six, staff members that a credit union wouldn’t have to hire, which is significant, especially now when staffing is a pain point.

4. It’s crucial that credit unions realize the power they have in member data. As host Mike Lawson pointed out in the episode, “Credit unions have more data on their members than Amazon has on their customers!” He also noted that credit unions list data protection as one of their top three concerns. We agree, which is why Anonyome Labs’ solution can be very beneficial for the credit union, because we can reduce fraud and optimize onboarding time. It’s a cost saving, it’s fraud prevention, and it’s safer. It’s an absolute win-win across the board.

5. Most credit unions want to make an impact or change, but they don’t know where to start. We say: Start with onboarding! Customers are now in a world of instant gratification. A clunky onboarding experience will lose you customers. Think about college students opening accounts for the first time. They’re a key audience, onboarding is their first impression of the financial institution, and they have little or zero tolerance for friction. Reusable credentials are the answer. And once you have improved your verification processes, we recommend looking next at optimizing loan processes.

6. This technology might sound new to banking, but Anonyome Labs has been pioneering in the area for 10 years. Decentralized identity (the technology underpinning reusable credentials) sounds complex, but it’s really just about giving the customer control of their information so they get to decide what they share and with whom. We have about 20 patents around this technology. In the episode, Mike wrapped things up by observing: “Anonyome Labs is striking the balance between security and convenience.” We agree!

Thanks Mike Lawson for having us on your couch during the recent GAC 2024—the biggest credit union advocacy event of the year, hosted by America’s Credit Unions! Our sales team enjoyed walking the floor and meeting folks at this important industry event.   

Anonyome Labs is the leader in proactive identity protection technologies. From verifiable credentials to VPNs and encrypted communications, we leverage our cryptography and blockchain technology expertise to take data privacy and security to the next level. Check out our podcast, Privacy Files, to hear what your peers and experts are saying about the state of member and consumer privacy in real time. 

If you’d like to get started with reusable credentials or other privacy and security solutions, get in touch today! 

The post Here’s How Credit Unions and Banks Can Save 20,000 Staff Minutes a Month appeared first on Anonyome Labs.


Trinsic Podcast: Future of ID

Kim Hamilton Duffy - From Learning Machine to DIF and the Evolution of Decentralized Identity

In this episode, I talk with Kim Hamilton Duffy, the Executive Director of the Decentralized Identity Foundation (DIF). Before her work at DIF, Kim served as the CTO at Learning Machine, an early pioneer in the self-sovereign identity space that was acquired in 2020.

We cover a range of topics, including:

- The early days at Learning Machine and how they acquired their first customers
- The messaging strategies that resonated and the unexpected moves that set them apart, like making it easy for customers to leave
- How adoption exceeded expectations at Learning Machine and how that compares to the current decentralized identity landscape

Kim offers deep insights from her extensive experience in the digital identity ecosystem, making this a conversation you won't want to miss!

You can learn more about DIF on their website: identity.foundation.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


KuppingerCole

Sep 11, 2024: A Glimpse into the 2024 IGA Market Landscape

The IGA market continues to grow, and although at a mature technical stage, it continues to evolve in the areas of intelligence and automation. Today, there are still some organizations looking at replacements of UAP and ILM or IAG, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and tackles risks originating from inefficient access governance features. The level of identity and access intelligence has become a key differentiator between IGA product solutions. Automation is still the key trend in IGA to reduce management workload by automating tasks, providing recommendations, and improving operational efficiency.

Tuesday, 13. August 2024

KuppingerCole

The State of the CIAM Market

The CIAM market continues to grow and change. There have been major acquisitions in this space, and new vendors are launching products and services. Security is always a driver, but deploying organizations want useful data to improve marketing effectiveness and increase revenues. New privacy regulations put more requirements for information collection and handling on customer organizations. CIAM systems must also be able to integrate with other IT, security, and enterprise IAM solutions. To capture market share, CIAM vendors have to be innovative. Fraud prevention and integrations with marketing tools are differentiators that many companies are looking for in CIAM.

John Tolbert, Director of Cybersecurity Research at KuppingerCole, has been covering the CIAM market for nearly a decade. In this webinar, he'll discuss the business requirements commonly submitted for CIAM RFPs, the current state of the art in CIAM, and the innovative features that leading-edge solutions offer. He will describe our Leadership Compass methodology and process, and show some high-level results from the report, which was just published this summer.




Indicio

Why you should vote for the Digital Farm Wallet in the SuperNova Awards

Trust Alliance New Zealand (TANZ) is a finalist in the SuperNova Awards for its transformative use of decentralized identity in agriculture. TANZ, co-funded by the New Zealand Government's Ministry for Primary Industries, created a pilot decentralized ecosystem for farmers to understand how to share trusted data on regulatory compliance around emissions and environmental sustainability — and it's so successful, it's being scaled to cover the entire agricultural sector. Here's why the project deserves your vote.

By James Schulte

The project

Farming is a data-intensive business, from animal welfare and food safety to greenhouse gas emissions and soil health, all of which is tied to market access, regulatory compliance, and consumer confidence.

In line with the government of New Zealand’s introduction of a Digital Identity Services Trust Framework, Trust Alliance New Zealand (TANZ), a non-profit, member-driven, farming industry consortium, conceived and built a pilot digital farm wallet and decentralized ecosystem (with partners Indicio and Anonyome Labs) for the country’s primary sector.

The Digital Farm Wallet pilot project was launched in January 2023 to provide farmers and other relevant parties with a secure, permissioned way to capture and share data while preserving privacy. The focus, initially, was on regulatory compliance and simplifying that burden.

But a key goal was to create the digital infrastructure to transform “brand” promises — origin, welfare, environmental compliance — into transparent proofs that consumers can trust, which is vital for New Zealand’s export market, and a transformative use of decentralized identity technology for global agriculture.

The result

The initial project quickly expanded to go beyond TANZ members to include farming organizations, regulators, and banks, and had over 200 active participants. Each wallet had four to six credentials farmers could create and use, including farm ID, greenhouse gas emissions, nitrogen emissions, and geospatial farm boundaries, which could then be submitted to relevant stakeholders, for example, regional councils, banks, and/or processors.

The challenge was to bring so many competing stakeholders together to collaborate around and adopt a new and unfamiliar technology. This was overcome by education and, more importantly, by the nature of the technology itself and the benefits it quickly delivered.

Specifically, farmers were able to realize tangible savings in time and money by simplifying and streamlining compliance requirements. Paper documentation that often had to be submitted up to seven times was transformed into a simple verifiable credential presentation.

Second, the technology gave farmers control over this data and how they shared it. This gave them confidence that the technology wasn’t another app where their data was aggregated and managed by a third party. This was vital to fostering collaboration with competitors in the ecosystem.

“The project vastly exceeded expectations,” said Sharon Lyon, Project Manager at TANZ. “We set out to build a pilot digital wallet to cater to the farmers, and ended up creating a verifiable credential ecosystem focused on the relying parties. We realized that the value to farmers in the project comes from the parties that need the farm data, and the farmer being able to give the data in a trusted and permissioned way.

“Once the credentials were available, relying parties were onboarded into the pilot. Being able to quickly share data about their goods or emissions to these key relying parties provided a huge benefit to the farmers, saving them time, creating better connections between them and their customers, and reducing the amount of effort they have to spend filling out the same forms multiple times. So building a decentralized ecosystem for the sharing of digital proof points, or credentials, and not just a digital wallet, soon became our focus.”

The Digital Farm Wallet is in the process of being scaled to the entire New Zealand agricultural sector, with expanded functionality.

If you would like to learn more about the project you can watch a recent discussion Indicio hosted on the Digital Farm Wallet here.

How to vote

Voting is live from August 5 to August 30 on the Constellation website and should take less than a minute. Please consider taking a moment to recognize all of the farmers’ lives that TANZ has improved, and the groundbreaking work put into this project.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Why you should vote for the Digital Farm Wallet in the SuperNova Awards appeared first on Indicio.


Elliptic

Ensuring sanctions compliance for stablecoins with Ecosystem Monitoring

Complying with rules and regulations around financial and economic sanctions is one of the most challenging issues facing the cryptoasset space. 


KuppingerCole

ARCON drut. Robotics GRC and Process Automation Platform

by Warwick Ashford

This KuppingerCole Executive View report looks at the challenges of achieving effective governance, risk, and compliance (GRC) in the increasingly complex and dynamic digital environment. It examines the benefits of automation and includes a technical review of ARCON’s drut. robotics-based GRC and process automation platform.

Finema

This Month in Digital Identity — August Edition

Welcome to the August edition of our monthly digital identity segment! This month, we’re diving deep into pivotal advancements and strategies that are shaping the future of digital identity. Here’s an in-depth look at the key topics we’re covering:

Enhancing Digital Identity Adoption

🌍 Our first article focuses on the EBSI-CAN project meeting, a landmark event in advancing digital identity adoption and cross-border interoperability between the EU and Canada. This meeting was crucial in addressing the complex challenges faced by international digital identity systems. It explored various technical barriers that currently impede seamless integration of digital identity systems across borders, such as differing standards and protocols. Regulatory alignment was another key focus, with discussions centered on harmonizing regulations to facilitate smoother interactions and exchanges of digital identity information between regions. Collaborative frameworks were also highlighted as essential for fostering international partnerships and creating a unified approach to digital identity. By tackling these issues, the EBSI-CAN project aims to build a more cohesive and efficient digital identity ecosystem that supports global digital transactions and interactions. This initiative represents a significant step toward overcoming the fragmentation in digital identity systems and achieving a more integrated global digital landscape.

Advancing Decentralized Identity

🔒 Our second feature delves into the exciting progress being made in the decentralized identity sphere, particularly the integration of OpenID’s verifiable credential protocols with DIDComm. This development marks a significant leap forward in enhancing digital identity management. OpenID’s verifiable credentials provide a robust framework for issuing and verifying digital identity information, while DIDComm enables secure, direct communication between trusted parties. The integration of these technologies facilitates a more secure and efficient exchange of identity information, supporting self-sovereign identity systems where users have greater control over their personal data. This advancement not only improves the reliability of digital identity exchanges but also enhances privacy by ensuring that personal information is only shared with trusted entities under secure conditions. The combination of OpenID and DIDComm represents a major stride toward a more user-centric and resilient digital identity infrastructure, paving the way for more secure and flexible identity management solutions.

Balancing Privacy, Security, and Convenience

🔐 In our third article, we explore the ongoing evolution of digital identity with a focus on balancing privacy, security, and convenience. As digital identity systems become more advanced, decentralized solutions are emerging as a promising way to enhance user control over personal data. These systems offer significant advantages over traditional centralized models by providing greater transparency and control to users. Our article examines how these decentralized systems address common concerns related to privacy and security while still delivering a high level of convenience. It discusses the technological innovations that are reshaping personal data management, including new methods for protecting user data and ensuring secure interactions with digital services. By exploring these advancements, the article provides insights into how future digital identity solutions might evolve to meet both user expectations and regulatory requirements, ultimately leading to a more balanced and user-friendly digital identity landscape.

The Strategic Advantage of Open Working Practices

💼 Our final feature in this edition discusses the strategic benefits of adopting open working practices. Open working practices, characterized by transparency, inclusivity, adaptability, collaboration, and community, offer organizations a powerful approach to enhancing their operations. The article explores how these principles can lead to greater organizational agility by breaking down traditional barriers and fostering a culture of open communication and collective problem-solving. It highlights how open working practices can drive innovation by encouraging diverse perspectives and ideas, leading to more creative and effective solutions. Additionally, the article examines how these practices can improve employee engagement and satisfaction by creating a more inclusive and supportive work environment. By embracing open working principles, organizations can achieve sustainable success and strengthen their performance in a rapidly changing business landscape.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — August Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 12. August 2024

IDnow

Beyond the regulatory tick box: Exploring the benefits of KYC.

New IDnow ebook unpacks the importance of KYC and how it can be used as a competitive differentiator.

In today’s online world, it’s hard to know who to trust. Digitalization and globalization have resulted in significant business challenges, such as increasing risks of fraud and identity theft, especially in the banking sector. Verifying prospective customers before they become users has therefore never been more important. 

For this reason, the Know Your Customer (KYC) process has become an integral step in securing financial transactions. Although a common compliance ‘tick-box’ requirement, some KYC processes can be overly complicated and lack transparency, which can lead to customer abandonment during onboarding. To set up a KYC customer journey that works for the business and the customer, it’s important to understand the role of KYC, how it works and how it can be used as a competitive differentiator. 

Click below to check out our latest ebook, ‘Building trust through KYC in banking’.

Building trust through KYC in banking. How can you set up a KYC process that satisfies your customers and meets regulatory requirements? Download now to discover:

What is KYC?
The importance of KYC in the banking sector
Regulatory impact on KYC processes

Read now

The importance of KYC in banking.

The KYC process is crucial in all situations where customers are involved in financial activities. Verifying a new customer’s identity and assessing potential risks helps banks establish trust in a customer profile, allows the bank to understand the nature of customer activities and provides protection from losses and fraud. Money laundering, in particular, remains a global problem that requires rigorous measures to combat effectively. 

According to the United Nations, money laundering accounts for 2-5% of global GDP (about US$800 billion to US$2 trillion) and banks have a major role in protecting against it. Criminal activity in this sector can affect the financial institution involved, customers, and wider markets and economies. Identity fraud can also cause serious financial harm. For example, in the United States, $16.1 billion in losses was attributed to identity theft in 2021. 

The days of visiting a bank to inquire about services or make a transaction are quickly coming to an end. Many customers are now unable, or unwilling, to visit brick-and-mortar bank branches. In the UK, almost three-fifths of the bank branch network has closed since 2015, a trend reflected elsewhere in the world.

UK: 86% of adults use online banking or remote banking.
Germany: 84% use online or mobile banking to carry out essential bank transactions.
France: 96% of people actively use their online banking services.

“KYC needn’t be seen as a tick-box exercise that must be performed. The banking sector should see KYC as a valuable competitive differentiator; to not only reassure new customers that you take their business seriously, but existing customers that your bank is a safe and secure place to transact,” said Rayissa Armata, Director of Global Regulatory Affairs at IDnow.

Offering a safe and secure KYC process doesn’t mean it needs to be slow and cumbersome; it can be intuitive and customized according to customer preference. In 2024 and beyond, as industries undergo their digital transformation, KYC will continue to become even more important.

Rayissa Armata, Director of Global Regulatory Affairs at IDnow.
The regulatory impact on KYC processes.

KYC processes have evolved significantly over the last decade thanks largely to a dynamic regulatory framework. These developments were mainly initiated at the European level and then transposed to the national level. AML laws have also gradually imposed standards applicable to KYC. There are six versions of the Anti-Money Laundering Directive, each of which was developed and released in response to political and societal issues surrounding money laundering and the latest fraud techniques.

While the banking sector and insurance companies are the main industries that are required to perform KYC, so-called non-financial companies are also included. For example, gambling platforms, real estate agents, art dealers, cryptocurrency platforms and sellers of luxury jewelry and precious metals.

Although not specifically designed for KYC, it is also important to consider the General Data Protection Regulation (GDPR) restrictions and requirements. This directly influences the way customer data is managed and requires companies to ensure that personal information collected for KYC is handled in accordance with the principles of privacy and data security.

Companies that do not comply with a country’s KYC obligations not only risk reputational harm and the potential loss of licenses but are also subject to heavy penalties from their national supervisory authority.

The importance of customer engagement, experience and expectations.

While it is mandatory to comply with regulations, the customer experience should never be taken for granted. The old saying “the customer is king” remains true, especially when banks move services online.  

Organizations with effective customer experience see an increase of 92% in customer loyalty. In this regard, customers can be a major driving force behind a bank’s success or failure. There are various things to consider when designing the ideal KYC customer journey. 

First are customer expectations. Proficiency and experience with different identification methods vary by country; French residents are more accustomed to online identification than Italian residents, for example. It is also important to consider customer service availability. In some southern European countries, users may be active at night and want to sign up and transact at that time, while further north, activity is more likely earlier in the day. This is why it is necessary to provide a real-time, 24/7 service, accessible at any time and from any location.

Customer engagement is also very important. As the public are already used to fast and frictionless buying processes and hyper-personalized interactions in other areas of their digital daily life, they expect the same from their bank. 

Users expect to be able to sign up for a product or service quickly, and delays in this area may make them lose interest. As there is now a wealth of choice for customers and bank loyalty may no longer be rewarded, the onboarding and verification process is a vital opportunity that can lead to higher conversion rates.

The benefits of KYC in banking.

The main goal of the KYC process is to prevent criminal activity. This helps to protect the bank, its customers and the wider financial markets from fraud and other financial crime. This goes some way to explaining why regulations are so strict.  

However, there are other reasons to invest and comply with KYC, including: 

Cost efficient: Good KYC processes can help businesses increase their conversion rates and reduce the costs of manual processing.
Improve the customer experience: When properly implemented, the KYC process helps avoid friction between the company and the user by granting instant access after verification.
Build trust in the organization: While the checks and requirements can be onerous, customers want to see that their bank is taking the issue seriously. Compliance establishes credibility.
Meet legal requirements: As complying with regulations is a legal responsibility, non-compliance can result in hefty fines and lawsuits. Apart from the monetary impact, non-compliance can also damage the company’s reputation.

How IDnow helps banks comply with KYC.

From ID document scans to full identity checks, IDnow offers a range of automated services, including AutoIdent and IDCheck.io, to ensure a smooth and instant customer experience. User onboarding is automated, fraud is detected, and services are fully compliant with KYC and AML/CFT standards. 

Document capture: Our web or mobile SDK enables high-quality dynamic scanning for ID document capture, while providing an excellent user experience.
Biometric capture: Our biometric tools enable users to take a selfie or facial recognition video to verify the identity of document holders.
Automated and/or manual data verification: Our fully automated document verification API extracts and verifies data in less than 12 seconds. In addition, a team of fraud experts can check documents manually.

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Verida

Verida Community Update — Verida.ai Launch and Network Explorer

Hello, gm everyone!

Chris Were here, the CEO and Co-founder at Verida 👋

This latest update covers new developments and releases from the past two weeks and a sneak preview of the upcoming Private Data Bridge developer tools.

Transcript:

Welcome to my latest community update for the Verida Network. Thank you for tuning in. It’s the 12th of August 2024, and there’s a little bit to cover today from what we’ve been working on and what we’ve released over the last two weeks.

Verida Network Explorer

So we’ll start with the Verida Network Explorer. This was recently announced. We’ve got a new refresh here, with a new layout.

We now have some nice graphs of the identities that have been created on the network, and you can now actually browse and navigate through the different DIDs and identities that were created on the network. We had a really interesting design decision. Obviously, we’re a privacy-first network, and we support private, encrypted data, but we also support public profiles and public data, so people can optionally make information public. So we had a bit of a question here: do we show that public information here on the Network Explorer, or even though it is public, should we not show it and not make it as accessible? Even though the information is public on the network, we could make it a little bit more hidden because we do make it clear that information is public when you create an account in the Verida Wallet. We did decide to make that visible. I’m interested in people’s feedback and thoughts on that. Feel free to post in the comments or reply to this thread if you’ve got a different take. Privacy is a really interesting problem, and there are pros and cons for both approaches, but we do have the Network Explorer release now. You can actually click through and have a look at the nodes that are on the network.

This is currently the mainnet Explorer, which is explorer.verida.network. We also have a testnet Explorer, which you can find a link to in our official announcement. And we will continue to expand the number of nodes that are available and expand the information that’s actually available about these nodes as we continue to expand the interfaces and the tools that we have around the Verida Network.

Verida.ai Launches!

Now, the big news that we have is we announced the Verida AI landing page and website, which is super exciting. As you’ve probably been following, we believe that the ability to own all of your data and then connect that to AI is a really powerful use case, and it’s going to be something that, in the coming months and years, we’re all going to expect: the ability to have AI that knows everything about us. But it’s super important that if we have these types of tools, they are privacy-preserving, and it’s only me that has access to this information — it’s not the big tech companies or other third parties. It’s just you with your private key, and you’re the only one that can talk to a private AI agent. So this landing page is really the start of that journey at Verida.

We are working with an ecosystem of partners to build out different assistants for different purposes. We’re actually building a showcase, an example assistant using your data. That is just a good starting point that developers can fork and use to build their own assistants. But we have partners that are building really advanced and interesting products that we’re going to connect into and allow your user data that you connect into Verida to connect into these other projects. And that’s a really important part of what we’re doing because this personal AI, this private AI space, is really emerging. There’s a lot of R&D that needs to happen. There are lots of different ways of tackling these problems. We want to partner with the best teams in the world that are tackling those problems and really help provide the infrastructure for your data to connect to those assistants, and also help provide the private database, storage, and private computation that’s needed to protect your data when it’s running with these different types of agents. So if you’re interested in this space, I really encourage you to visit verida.ai, click on the “Become an Early Adopter,” put your email address in, and follow our newsletter. We’d love to get more insight and feedback from you. So please fill out the form that you receive once you subscribe. Not only will you get early access to some of the projects that we’ve partnered with and some of the AI assistants that we’re building to showcase, but you’ll also be able to keep up to date with the latest news, specifically about private AI, AI that’s built using your data, and what’s happening in that space.

If you haven’t already, check it out. We’ve got a mockup of what this is actually going to look like: the ability to talk to an AI, different types of assistants for different purposes, connecting different types of data to the AI. Obviously, a chat interface is what we’re used to when you’re talking to large language models. We do have projects we’re talking to that have more animated avatars that you talk to. So while this is just the start of an interface, we actually expect Verida data to connect to lots of different types of AI products and services that have different ways of interacting with them. That’s super exciting, and hopefully, we can share more about some of those upcoming partnerships in due course. As we touched on, the ability to connect your data is super important, and this is really where Verida has focused a lot of our effort. We’ve been building out the Verida Private Data Bridge, which is the underlying infrastructure that allows the ability to connect all these different connections and allows you to bring your data into the Verida Network and then connect your data in a secure way to these different AI agents and tools. So if you’re a builder and you’re interested in building in this space, maybe you’re interested in building an AI agent that’s using data from users, please come and get in contact with us. Connect with us on Twitter or Discord. We can hopefully give you some early access to some APIs and some tools to allow you to start building sooner rather than later.

Verida Private Data Bridge

We did announce the Private Data Bridge. This screenshot shows an interface to the Private Data Bridge. We’ve been doing a lot of market research, talking to a lot of C-level executives, partners, and other interested parties. And it’s become very clear that the ability to have an AI agent that has access to your personal data is valuable, but equally as powerful is if it has access to all of your business information — your knowledge bases, your work email, your Slack, your Telegram, particularly if you’re in crypto, access to your Google Drive. And so part of what we’re doing is actually changing our language a little bit. So moving forward, instead of referring to it as the Personal Data Bridge, we’re actually going to start referring to it as the Private Data Bridge because that’s more encompassing of both personal data and business or organizational type data. In terms of technology, we’re not changing much; we still enable the same capabilities. So Private Data Bridge makes a lot more sense.

As you know, we are building in the open. So here’s a little sneak preview of the developer interface for the Private Data Bridge. This is not what end users will use, but this is what developers can use to talk to user data. As a developer, you’ll be able to connect your own data, and you can see the different connections that you’ve made. Obviously, we support the ability here to connect to Google accounts, and from this interface, you can easily sync your data or disconnect that source. If you click “Show Logs,” it actually opens up a little modal window, and it shows all of the current activity that’s happening when data is synchronizing, so you can get an insight into what’s happening. This is super helpful as a developer if you’re building a new connection. So we support a number of different connections. We’ll support more than this at launch, but these are the ones that we’ve been working on so far. So as a developer, you’ll be able to come in here, write some code to create a new connection, and use this interface to interact with your connector.

As a developer, we’ll also add to this API documentation some API tools so you can easily write apps or AI agents that use user data. And as an example of that, we’ve got a very basic interface here where you can search the data that you have as a user and browse that in a very simple way. You can filter it and sort it.

You can look at different types of data. So again, as a developer, you can actually look under the hood, and you can see all the synchronization logs of connections. You can actually look at, you know, if you connect your Gmail, you can actually look at the raw emails that have been imported and synchronized, or social media posts and things like that. So this is just a basic interface, but this is what we’ve built so far for developers to help them integrate. And if you start building with the Verida AI technology stack, you’ll have access to all of these types of tools.

This is a work in progress, obviously, but the key thing here is that we have a window into your own data, and we’re using this for testing purposes and making this available to developers in the coming weeks, which is really exciting. So yeah, there’s a preview of what we’re doing with the Private Data Bridge and Private Data Connections.

And we really look forward to getting everyone that wants to build access and letting them have a play.

Reach out if you’re an AI builder

So yeah, that’s it for me in this fortnightly update. There’s a lot happening, as you can tell, behind the scenes on both the Private Data Bridge development and also the Verida AI tooling and showcase. So really looking forward to being able to present some really exciting things for you in the next update in a couple of weeks. In the meantime, keep your eye out. The light paper will be released very shortly, and we have a few other partnership announcements coming up as well. So thanks for tuning in. And as I said, if you’re interested in building in this space, building AI agents using user data, please reach out to us. We’d love to support you and get you early access.

📢 Verida Community Update — Verida.ai Launch and Network Explorer was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

Season 4 of the Ocean Zealy Community Campaign!


We’re happy to announce Season 4 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

3,000 Ocean Tokens ($FET), rewarded to the Top 50 users on our leaderboard 🚀

📜 Program Structure

Season 4 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰ Campaign Duration: until the 31st of August

🤔 How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 4 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

Digital Travel Credentials (DTC) are leading the digital identity revolution


By Trevor Butterworth

Analyst Alan Goode recently noted that “the travel industry is at the vanguard of digital identity adoption globally.”  

As a company leading the vanguard, with partner SITA, we agree. But there’s a lot to unpack here for consumers and other business sectors.

To legally cross a border, you must have a passport; therefore, it stands to reason that crossing a border “digitally” requires a digital identity as trustworthy as (or even more trustworthy than) a physical passport. We explored the idea of “government-grade” digital identity in a previous blog and how Digital Travel Credentials following standards set by the International Civil Aviation Organization (ICAO) achieved this grade by using decentralized identity technology.

This technology changes the fundamental way we identify ourselves digitally and online, and the way we share and authenticate information.

They allow us to hold our own data in a highly protected way.
They make this held data cryptographically verifiable so that it is portable and trustworthy.
This eliminates the need for “identity accounts” that require logins and passwords, which are at risk of being phished or faked (for example, frequent flyer programs).
This in turn eliminates the need for identity accounts with personal data to be stored by third parties for verification (a security risk, because the data is stored in centralized databases that are difficult to protect against data breaches).
This also means that a person can hold their own biometric data, bind it to their digital identity, and have it cryptographically verifiable in a way that obviates the risk of AI deepfakes.
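The "hold your own data, let the verifier check it" pattern above can be sketched in a few lines. This is a deliberately simplified illustration: real verifiable credentials use asymmetric signatures (such as Ed25519) so that verifiers need only the issuer's public key; the HMAC and the demo key below are stand-ins chosen purely because they are in the Python standard library.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key, for illustration only. In practice the issuer
# signs with a private key and verifiers use the matching public key.
ISSUER_KEY = b"demo-issuer-key"

def issue(claims: dict) -> dict:
    """Issuer binds an integrity tag to the claims; the holder keeps both."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify(presented: dict) -> bool:
    """Verifier checks the presented data without ever storing a copy."""
    payload = json.dumps(presented["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, presented["tag"])

credential = issue({"name": "Alice", "passport_no": "X1234567"})
print(verify(credential))                      # True: untampered
credential["claims"]["passport_no"] = "FORGED"
print(verify(credential))                      # False: tampering detected
```

The design point is that trust travels with the data: any party who can check the issuer's key can detect tampering, so no central database of customer records is needed for verification.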

Centralized databases accessed through user accounts are a fundamentally weak way to manage identity and authentication to access resources. This is because they are susceptible to a single point of failure. 

Here’s a hypothetical: Imagine a business database with a million customer accounts and their account details. Imagine that all but one customer (999,999 of them) are hypervigilant: they regularly change their passwords and never click on suspicious SMS messages or emails. Then that one remaining person clicks, in error, on a phishing email and inputs their account login and password. That single phishing attack nets the personal details of all 999,999 vigilant customers as well.

That is the essence of data breaches and identity theft: it’s an all-you-can-eat buffet costing billions of dollars in both losses and security spending. Current solutions treat the symptoms rather than the disease: multi-factor authentication, passwordless login, and single sign-on all add complexity, expense, and friction to what is meant to be an instant process, without removing the underlying problem.

And we haven’t even talked about complying with data privacy regulation.

What the travel sector has quickly realized is that decentralized identity solves all these critical identity and access management problems: Let the customer hold their data and let the portable trust created by decentralized identity do all the work. 

With government-grade verifiable identity credentials, travel can be seamless because we can authenticate this information when it is presented by customers. We don’t need to store and manage it. 

Tackling the biometric threat
Perhaps one of the most important and least commented-on aspects of digital travel is that decentralized identity saves biometric systems from catastrophic risk.

Biometrics were the answer to passwords: Instead of the farce of coming up with new, complicated phrases every few months to manage your account login, use your face. Or voice. Or fingerprint. 

These became the seamless answer to password theft — until generative AI technology suddenly made biometrics easy to fake. And while you can reset a password, you can’t reset a person’s physiological characteristics. Once a person’s biometrics are stolen, how are they supposed to get them back? 

This is where verifiable credentials and decentralized identity come to the rescue. There are multiple ways to bind liveness and biometric information to an identity check such that you can be sure that I am who I claim to be. And because this biometric information can be verified cryptographically, it can be held by the traveler instead of being stored in an airline database, where it turns into a permanent privacy and security liability. 

Verifiable credentials save biometric systems.

First-class data sharing
This is why what’s happening in travel with digital identity is showing the world the future. We have taken the toughest use case — crossing a border digitally — and solved it to the satisfaction of governments, airlines, airports, AND travelers.

The combination of people holding their own data, deciding who they want to share it with, and this data being cryptographically verifiable rewrites the entire digital landscape. With portable trust, information can go anywhere.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Digital Travel Credentials (DTC) are leading the digital identity revolution appeared first on Indicio.


KuppingerCole

Sep 24, 2024: Navigating Data Challenges: Unlocking Power of Data Marketplaces

Modern enterprises face numerous data-related challenges, including siloed storage, security threats, and compliance requirements, making strategic and efficient data management essential. Navigating complex data landscapes requires ensuring data accessibility and security, while preventing unauthorized access and breaches. Robust data management strategies are key to maintaining competitive advantage and operational efficiency in today's fast-paced business environment. Data marketplaces – platforms that connect data producers of specific data products with data consumers who can leverage them for their own goals and projects – are an emerging technology that can power such strategies.

Join experts from KuppingerCole Analysts and Immuta as they discuss how data marketplaces address challenges in data management. They will explain how this approach can enhance data access control and internal sharing, provide a centralized platform for managing data assets, help break down silos, ensure compliance, streamline governance, improve security, and foster innovation, driving business success in a data-driven world.

Alexei Balaganski, Lead Analyst at KuppingerCole Analysts, will provide an overview of the risks and challenges in managing sensitive data at the enterprise level amidst the evolving compliance landscape. He will discuss how to balance security with accessibility and productivity, offering insights on reducing data friction while meeting regulatory requirements.

Bart Koek, Field CTO at Immuta, will discuss strategies for promoting efficient and compliant data sharing, present practical use cases, explore best practices from real-world implementations of data marketplaces at leading organizations, and provide an overview of Immuta’s Data Security Platform.

Sunday, 11. August 2024

KuppingerCole

Identity Security - the Epicenter of Cybersecurity


In this episode of the KuppingerCole Analyst Chat, host Matthias Reinwarth is joined by Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, to discuss the evolving landscape of identity security. They explore the centrality of Identity and Access Management (IAM) in IT security, the rise of Identity Threat Detection and Response (ITDR), and the latest trends in fraud prevention. The conversation delves into the use of generative AI in cyber-attacks, the importance of gamification in cybersecurity, and the anticipated advancements in ITDR solutions. Join us to gain insights into these critical areas shaping the future of cybersecurity.




Spherical Cow Consulting

IAM’s Time Problem: Why Digital Attestation Needs Work

Identity management has a time problem. Discussions in the hallways and conference calls for various identity and security standards focus on immediate, point-in-time requirements. Can this person or thing authenticate itself at the moment they need to? Are they authorized at that moment to access the system, service, or data they need to get their job done? Don’t get me wrong; those are big, important questions that need to be addressed. But sometimes, you need more. You need to dig into the past to determine responsibility for specific actions or the provenance of digital material. This area is called attestation and verification and is at least as complicated as proper authentication and authorization, especially when considered over longer time frames.

Understanding Modern Digital Attestation

Modern digital attestation is the process of proving or verifying the authenticity and integrity of a system, device, or data, often through the use of cryptographic techniques. Has the data been tampered with? Can it be trusted to be what it is supposed to be? Or have conditions changed such that you cannot immediately trust what you have?

The process of attestation has, of course, been around longer than computers. If you’ve used a notary service, you’ve encountered a non-digital attestation process. The notary verifies the identity of the person via official identity documents (such as a passport or driver’s license). They then witness the signature, provide a stamp attesting that the signature is genuine and the person has been identified, and record the whole transaction in a ledger.

In principle, digital attestation is much the same: an identity is verified, a credential is issued (or an existing one is used), signed, and an entry is made in an electronic ledger. In practice, however, the requirements around identity verification change based on context. Different industries, jurisdictions, and services all have different rules. (Discussing identity verification is a different blog post.) The signature attesting to the verification is where cryptographic magic comes in, and that’s where time becomes a challenge.

The Role of Electronic Ledgers in Attestation Over Time

Some people see “ledger” in the world of online data and think “blockchain.” That’s certainly one way to go about handling a ledger. Advocates argue that blockchain technologies are the One True Way to properly handle attestation over time. Everything is recorded, nothing can be deleted or changed, and everything is transparent to the entities that can access that blockchain. Of course, there are issues—such as the GDPR’s “Right to be Forgotten,” which states that certain data must be deleted when requested—that make using blockchain technology a bit more complicated than anyone would want (there’s a great paper about that here).

All that said, ledgers do not need to exist in any particular blockchain format. In fact, for the purposes of this discussion, the format of the ledger (beyond it being digital instead of a dusty book somewhere) doesn’t matter. What matters is that signature and the associated attestation that’s being stored.

Cryptography’s Critical Role in Digital Attestation

You can’t talk about cryptographic signatures without understanding a few salient points about cryptography. Cryptography is used to enhance the security, scalability, and manageability of systems and data, from individual devices to large-scale distributed systems. Cryptography relies on keys that encrypt data. Symmetric cryptography allows the key that encrypts the data to also decrypt the data. Asymmetric cryptography requires one key for encryption and a different key for decryption. The amount of math involved in making all this work is staggering and entirely out of my pay grade. Fortunately, there are people in the world who think developing the math for advanced cryptographic systems is amazing. Thank you for your service.
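The symmetric case can be sketched with nothing but the Python standard library. This is illustrative only: `hmac` produces a symmetric authentication tag, whereas real attestation systems typically use asymmetric signatures (such as Ed25519), which require a dedicated cryptography library.

```python
import hashlib
import hmac

# Symmetric: the SAME secret both creates and verifies the tag.
secret = b"shared-secret-key"
document = b"record to be attested"

tag = hmac.new(secret, document, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, expected: str) -> bool:
    # Recompute the tag with the given key and compare in constant time.
    candidate = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, expected)

print(verify(secret, document, tag))        # True: right key
print(verify(b"wrong-key", document, tag))  # False: different key
```

In an asymmetric scheme, only the private key could produce the signature and anyone holding the public key could verify it; that separation is what makes long-lived third-party verification possible.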

You don’t have to know the math to use the systems (whew!). But you do have to give some thought to how to manage the keys used for encryption and decryption. Securely generating, using, storing, and revoking keys is a Very Big Deal and enough to keep any IT administrator or security practitioner on their toes. While there are several models to follow, including Public Key Infrastructure (PKI), the Key Management Interoperability Protocol (KMIP), Hardware Security Modules (HSMs), and several others, when you think about those being used over the course of decades, you’ll start to see some critical weaknesses in the system.

Today’s Attestation, Tomorrow’s Cipher

As people create digital attestation and verification specifications, they focus on the technologies available today. They also presume those technologies will be available tomorrow. And they’re right. Technologies will be available tomorrow. They will probably be available next year. Ten years from now, though? Twenty? One hundred?

Now for a different consideration: if you use a model that has one key signing all the things for a few years, how hard will it be to dig through the data to find any particular signing instance? How do you identify the point in time a key might have been compromised (i.e., copied and used by an unauthorized party) and then determine all the things signed with the compromised key? Now think about this exercise for data that’s a decade old.
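A toy in-memory ledger (all names and entries hypothetical) makes the problem concrete: answering “what did this compromised key sign?” is a full scan unless entries are indexed by key identifier, and over a decade that scan covers every signature ever recorded.

```python
# Hypothetical ledger entries: each records which key ID produced a signature.
ledger = [
    {"ts": "2031-04-02", "key_id": "K-17", "artifact": "deed-881"},
    {"ts": "2033-11-19", "key_id": "K-23", "artifact": "cert-104"},
    {"ts": "2034-06-07", "key_id": "K-17", "artifact": "deed-912"},
]

def signed_by(entries, key_id):
    """Everything attested with the given (possibly compromised) key:
    a linear scan over the full history."""
    return [e["artifact"] for e in entries if e["key_id"] == key_id]

# If K-17 is compromised, every artifact it signed is now suspect.
print(signed_by(ledger, "K-17"))  # ['deed-881', 'deed-912']
```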

This isn’t a scenario that will play out with everything that includes digital attestations today. Businesses are often legally required to retain records for only 5-7 years. Similarly, personal tax records must be kept for only a limited time. But there are scenarios where the required time frames are much, much longer. In the U.S., copyright protection lasts for the lifetime of the author plus 70 years. Establishing provenance for artwork can span centuries.

Standards Matter

There are efforts that are starting to poke at the edges of the problem of digital attestation and verification. One example is the Coalition for Content Provenance and Authenticity (C2PA). That’s an effort coming out of the Joint Development Foundation, a non-profit that brings together the efforts of the Content Authenticity Initiative (CAI) and Project Origin. They are focusing on the provenance of media for publishers, creators, and consumers. Another effort, coming from a different angle, is the Supply Chain Integrity, Transparency and Trust (SCITT) initiative in the IETF. Their focus is on “the ongoing verification of goods and services where the authenticity of entities, evidence, policy, and artifacts can be assured and the actions of entities can be guaranteed to be authorized, non-repudiable, immutable, and auditable.”

But in both those cases, the focus is a bit more on today and less on decades from now. This is understandable when you think about it. If you can’t solve for today, then you might not even get to next year, so focusing on immediate needs is a necessary step. Of course, that doesn’t mean you can ignore the longer term, and given the state of existing efforts, the longer term is a space ready for attention.

Exploring Solutions: Hierarchical Deterministic Keys for Scalable Attestation

OK, so no, I don’t have answers, but I was definitely inspired to learn more on this topic during IETF 120. I had the best hallway conversation about the issues of time, key management, and how identity practitioners really need to think harder about the long-term viability of the specifications under development. People were developing specifications and protocols that allowed for secure digital attestations (yay). They weren’t (aren’t) thinking about the fact that, over time, a significant percentage of the signatures will be revoked, and that has to go in the ledger as well. Long story short: ledgers won’t be scalable over any length of time.

The solution to this we discussed most was Hierarchical Deterministic Keys. HDKs can be used in attestation processes to create derived keys for specific operations or time frames. This allows the system to maintain a secure and scalable method of attestation by ensuring that each key is only valid for a particular purpose or time, minimizing the risk of compromise and reducing the need for frequent key revocation. Basically, every time you wield a key, you create a derived key so you can more easily identify when that key was used. Revocation becomes less of an issue when the scope of key use is constrained. Of course, if your master key is compromised, you’re kind of doomed, but that’s the case in any key management scenario.
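The scoping idea behind HDKs can be illustrated with a standard-library sketch. This is a stand-in under stated assumptions: real HDK schemes (e.g. BIP32-style derivation) use elliptic-curve math, not bare HMAC, but the derive-per-scope pattern is the same.

```python
import hashlib
import hmac

master = b"master-secret-kept-offline"

def derive_key(master: bytes, scope: str) -> bytes:
    # Deterministic: the same master and scope always yield the same key,
    # so a derived key never needs to be stored, only re-derived.
    return hmac.new(master, scope.encode(), hashlib.sha256).digest()

k_q3 = derive_key(master, "2024-Q3")
k_q4 = derive_key(master, "2024-Q4")

# Each period (or purpose) gets its own key, so a compromise and its
# revocation entry are scoped to one period, not the whole history.
print(k_q3 != k_q4)                           # True: distinct per scope
print(derive_key(master, "2024-Q3") == k_q3)  # True: re-derivable
```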

A Use Case: Refugees

If you’ve read this far, you probably think this is an interesting problem, but you might want a realistic example. So let’s talk about Maria.

It’s the year 2045. Maria fled her home country 20 years ago due to a conflict. She arrived in a host country, where she was granted asylum and eventually settled. It’s become home, and now she wishes to apply for citizenship. As part of the application process, she needs to prove her identity and submit a birth certificate from her country of origin.

Maria’s original birth certificate was lost during her escape, but she had a digital copy of the document stored in a digital identity wallet issued by an international organization that assists refugees. This digital birth certificate was issued with a cryptographic signature attesting to its authenticity at the time of issuance. Digital credentials ftw!

But wait. That was 20 years ago. While they used the best cryptographic techniques at the time, the quantum apocalypse happened. The agency that issued Maria’s birth certificate has had to revoke many keys used for signing documents, either due to suspected compromise or the routine expiration of cryptographic keys. Each revocation must be recorded in a ledger, which has grown significantly over time. The host country has to search through the records for millions of refugees using old credentials; not exactly a trivial exercise.

But wait, there’s more!

The digital birth certificate’s provenance must be established across multiple jurisdictions, as Maria’s host country requires confirmation from the original issuing country (which has undergone significant political and administrative changes over the years). This requires coordination between different governments, each with their own systems and cryptographic practices. 

According to the United Nations High Commissioner for Refugees (UNHCR), “By May 2024, more than 120 million people were forcibly displaced worldwide as a result of persecution, conflict, violence or human rights violations. This includes: 43.4 million refugees. 63.3 million internally displaced people.” There are many Marias in the world today, and there will only be more in the coming years.

The Data Deluge: Preparing for the Future of Digital Attestation

According to Exploding Topics, 402.74 million terabytes of data are created daily. Not all of it will be kept. Not all of it will involve digital attestations as to its authenticity. But if even 1% of that data does require digital attestations that last for at least a decade, you’re looking at 14,700.01 exabytes of data in 10 years. That’s … a lot of data.

As we’re developing specifications that allow us to do very smart things to attest to today’s data authenticity, we really need to start thinking about what that means after 10 years of new data, new signatures, data revocation, and more.

As always, I’m hoping this post will be the start of a conversation. If you have more information on the scalability of long-term attestation, please let me know!

The post IAM’s Time Problem: Why Digital Attestation Needs Work appeared first on Spherical Cow Consulting.

Friday, 09. August 2024

auth0

What Is Attribute-Based Access Control (ABAC) and How to Implement It in a Rails API?

There are different ways to implement an authorization system and the one you choose depends on your application's needs. Attribute-Based Access Control (ABAC) is just one of them, so let's go ahead and learn how to implement it in a Rails API.
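The core idea can be sketched in a few lines (shown here in Python rather than the article’s Ruby, with made-up attribute names): the decision is computed from attributes of the subject, the resource, and the action, rather than from a fixed role.

```python
def can_access(subject: dict, resource: dict, action: str) -> bool:
    """Toy ABAC policy: decisions depend on attributes, not roles."""
    if action == "read":
        # Readers must belong to the resource's department.
        return subject.get("department") == resource.get("department")
    if action == "delete":
        # Only the owner may delete.
        return subject.get("id") == resource.get("owner_id")
    return False  # deny by default

alice = {"id": 1, "department": "finance"}
report = {"owner_id": 2, "department": "finance"}

print(can_access(alice, report, "read"))    # True: department matches
print(can_access(alice, report, "delete"))  # False: not the owner
```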

Ocean Protocol

French Fiscal AI Innovation and Prediction Challenge: Podium Winners

Introduction

The French Fiscal AI Innovation and Prediction Challenge invited data scientists from around the globe to analyze an extensive dataset encompassing 40 years of French tax information. The competition aimed to uncover patterns and trends in municipal finances, offering participants a unique opportunity to engage with a rich dataset produced in collaboration with the EU Commission’s Science Hub, the Joint Research Centre. This collaboration ensured the challenge was grounded in real-world data and addressed pressing fiscal questions relevant to European policymakers.

Participants were tasked with developing predictive models, identifying correlations between population size and tax revenue, and assessing the impact of significant tax policy changes, such as eliminating the Professional Tax in 2010. The challenge required a deep dive into the data, employing advanced analytical techniques to generate actionable insights. By focusing on these objectives, the competition aimed to enhance data-driven decision-making and contribute to more effective governance of French municipalities.

The challenge highlighted the importance of leveraging AI and machine learning to interpret complex datasets and forecast future trends. Through their analyses, participants showcased their technical skills and contributed to a broader understanding of the fiscal dynamics at play within French municipalities. The insights gained from this challenge are expected to aid in formulating more informed and effective fiscal policies, ultimately benefiting the governance and financial health of municipalities across France.

Top 10 submissions: “French Fiscal AI Innovation and Prediction Challenge”

1st Place: Anamaria

Anamaria’s analysis excelled in its comprehensive approach to the dataset, addressing all aspects of the challenge criteria. Anamaria identified Paris, Marseille, Toulouse, Nice, and Lyon as the top municipalities by net revenue, with Paris leading at €51.5 billion. She highlighted the significant fluctuations in revenue trends, particularly noting the impact of the Single European Market in 1993, which caused a reduction in local business tax rates, and the 2008 financial crisis, which temporarily reduced local authority revenues due to a drop in economic activities.

Her analysis showed a median growth rate of 260.4% over 20 years, with specific periods such as 2017–2022 showing a 119.5% increase. Anamaria found a strong positive correlation (0.88) between population size and tax revenue, indicating that larger municipalities generally have higher tax revenues. The abolition of the Professional Tax in 2010 led to a 71% decrease in municipal revenues, with average yearly revenues dropping from €6.8 billion pre-2010 to €1.97 billion post-2010. This change significantly impacted larger municipalities, which relied heavily on the Professional Tax.

Anamaria employed the Prophet model for forecasting, which performed well for shorter horizons (1–4 years) but faced challenges with long-term predictions (5 years), with mean absolute errors (MAE) ranging from 0.27 for 1-year predictions to 2.35 for 5-year predictions. The model captured yearly trends effectively, allowing accurate forecasts within the 95% confidence interval for shorter periods.
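For readers unfamiliar with the metric: mean absolute error is simply the average magnitude of the forecast misses. A minimal illustration (the numbers below are made up, not taken from Anamaria’s submission):

```python
def mae(actual, predicted):
    """Mean absolute error: average of |actual - predicted|."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical yearly revenue (index units) vs. a model's forecasts.
actual    = [10.0, 12.0, 11.5, 13.0]
predicted = [10.2, 11.7, 11.9, 12.8]

print(round(mae(actual, predicted), 3))  # 0.275
```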

2nd Place: Luca

Luca’s report provided a detailed exploration of the dataset, focusing on the impact of tax reforms and the relationships between various fiscal metrics. Luca analyzed the 2009/2010 tax reform, highlighting how larger municipalities faced fiscal challenges due to their reliance on the Professional Tax. His meticulous data transformation and integration included converting French francs to euros and consolidating municipalities with similar names or codes, which was crucial for his analysis.

Luca’s visualizations showed significant revenue declines during major reforms and economic events. For instance, the revenue dropped by approximately 20% during the creation of the Single European Market in 1993 and around 15% during the 2008 financial crisis. His scatter plots revealed a strong correlation between population size and tax revenue (correlation coefficient of 0.88), suggesting that larger populations tend to generate higher tax revenues.

Luca identified FB (Foncier Bâti) as the most significant tax source for municipalities and labor unions, accounting for over 50% of total tax revenue for municipalities and 28.2% for labor unions. He used the Prophet model and conducted thorough cross-validation, achieving mean squared error (MSE) values as low as 0.0007 for short-term forecasts. His approach provided a comprehensive understanding of how tax reforms and population dynamics influence municipal revenues.

3rd Place: Ameneh

Ameneh’s analysis provided a broad perspective on the economic conditions across French municipalities, emphasizing disparities between urban and rural areas. Ameneh identified Paris, Marseille, Nice, Toulouse, and Lyon as top revenue-generating municipalities, with Paris leading significantly at €67.4 billion, followed by Marseille at €7.53 billion, Toulouse at €6.18 billion, Nice at €5.87 billion, and Lyon at €5.47 billion. Her analysis revealed consistent revenue streams for top municipalities, while rural areas like Rouvroy-Ripont and Île-Molène showed minimal revenue, highlighting economic disparities.

She categorized municipalities into consistent growth, fluctuating growth, and declining trends, with 21,940 municipalities exhibiting steady growth, 12,954 showing variable growth, and only 60 municipalities declining. Ameneh found a strong positive correlation (0.966) between population size and tax revenue, emphasizing the importance of population size in fiscal performance. FB (Foncier Bâti) was identified as the most significant tax, predominant across almost all municipalities, contributing on average €12.33 million annually.

Ameneh’s linear regression model provided high accuracy and low error rates, with an R-squared value of 0.92, effectively capturing the linear relationship between past and future revenues. Her analysis highlighted the economic disparities and critical factors influencing municipal tax revenues, providing valuable insights for policymakers.

Interesting Facts

Revenue Trends Reflect Economic Events: During significant economic events, such as the 1993 Single European Market creation and the 2008 financial crisis, municipal revenues dropped by approximately 20% and 15%, respectively. This demonstrates the sensitivity of municipal finances to broader economic shifts.

Impact of Tax Reforms: The abolition of the Professional Tax in 2010 led to a 71% decrease in municipal revenues, dropping from an average of €6.8 billion pre-2010 to €1.97 billion post-2010. Larger municipalities, heavily reliant on this tax, faced significant fiscal challenges.

Diverse Tax Revenue Sources: Foncier Bâti (FB) was identified as the most significant tax source, accounting for over 50% of municipal revenue and 28.2% for labor unions. This underscores the reliance on property taxes in municipal finance and the impact of property values on fiscal health.

Strong Correlation Between Population and Revenue: Analyses revealed a strong positive correlation between population size and tax revenue, with coefficients of 0.88 and 0.966. Larger municipalities tend to generate higher revenues, highlighting the importance of population in fiscal planning.

2024 Championship

The challenges feature a prize pool of between $10,000 and $20,000, distributed among the top 10 participants. Our championship points system distributes 100 to 200 points across the top 10 finishers in each challenge, with each point valued at $100.

2024 Championship standings prior to this challenge

By participating in challenges, contestants accumulate points toward the 2024 Championship. Last year, the top 10 champions received an extra $10 for every point they had earned.

Moreover, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists retain their intellectual property rights while we offer assistance in monetizing their creations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

French Fiscal AI Innovation and Prediction Challenge: Podium Winners was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


uquodo

Web 3.6.0 and Mobile 3.1.3 updates

The post Web 3.6.0 and Mobile 3.1.3 updates appeared first on uqudo.


Thursday, 08. August 2024

Anonym

6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts

Anonyome Labs’ CTO Dr Paul Ashley recently appeared on one of the most-streamed cybersecurity podcasts in the world, The Bid Picture with Bidemi Ologunde, to discuss some of the hottest topics in privacy and cybersecurity today.  
 

The wide-ranging interview covered:

- Digital identities, and how Anonyome Labs has packaged them for consumers as “Sudos” in MySudo, the world’s only all-in-one privacy app, and for businesses through our decentralized identity solutions
- Surveillance capitalism and the concept that if you’re not paying for the product, you are the product, especially with companies such as Google and Meta whose main source of revenue is users’ personal data
- The rapid spread of artificial intelligence and its applications for both good and evil, including in surveillance capitalism, data broking and data abuse
- Privacy advice for everyday consumers to protect their personal information
- The greatest privacy development of the decade – decentralized identity – and how its centerpiece – reusable credentials – is transforming the identity management space and handing consumers back control over their personal information
- The urgent and ongoing need for frictionless, simple privacy tech for consumers and business, and how Anonyome Labs will continue to deliver both, building on its 10-year history pioneering in the space.
Key moments in the fascinating discussion include when Dr Ashley explained to Bidemi Ologunde:

Sudo digital identities were inspired by proxies in cybersecurity: “We thought, how could we apply a proxy to a normal user? A Sudo is a proxy for online and offline life used for all different situations. Use your Sudo persona instead of your personal data and plastic credit cards,” Dr Ashley said.

Lots of businesses, such as plumbers and law enforcement, use MySudo to separate their professional from their personal communications: “The end-to-end encrypted functionality of MySudo is [particularly] useful for law enforcement,” Dr Ashley explained.

Some of the reasons MySudo is the world’s only all-in-one privacy app are that we don’t collect or store our users’ personally identifiable information, and the app offers disposable and customizable payment cards, phone numbers, email, and browsers all in one app.

Parents looking to manage the risks of social media for their children should take at least these five steps, because each step increases their level of privacy:

Step 1: Use a VPN, such as MySudo VPN.
Step 2: Use a safe browser (MySudo has private browsers with site reputation and ad and tracker blockers built in).
Step 3: Use one of the more private search engines, such as DuckDuckGo or the new honest search engine FreeSpoke.
Step 4: Get MySudo for compartmentalization, and set up all your kids’ gaming accounts with Sudo information (never their own or your personal information).
Step 5: Use a password manager to manage and store all your different passwords (use a different password on every account).

The deeper problem with AI is that it can scan vast quantities of data and link them, identifying the user: “AI has risk of being used for surveillance capitalism but … there’s a lot of scope going forward to use AI tech constructively, such as for privacy products,” Dr Ashley said. Watch this space!
People have a lot of awareness of the need for privacy but not a lot of understanding of the technology available. “Anonyome Labs will continue to create simple products that are frictionless. One of our goals is to make the tech simple for normal users,” Dr Ashley said. One example is the MySudo browser extension, which makes it easier to use Sudos on desktop.

Another aspect of the future of privacy and cybersecurity is decentralized identity, or self-sovereign identity, and verifiable credentials. “This important technology is giving users control of their personal data and letting the user be in the middle of any data exchange,” Dr Ashley said. While DI is a big enough topic for its own episode of The Bid Picture, Dr Ashley did touch on the notion of consumers carrying reusable or verifiable credentials in an identity wallet and selectively disclosing only relevant personal information on request from services. “This is yet another tool in your privacy basket, and it’s been designed from the ground up for privacy,” Dr Ashley explained.

 Listen to the podcast episode  
 

Anonyome Labs is the leader in proactive identity protection technologies. From verifiable credentials to VPNs and encrypted communications, we leverage our cryptography and blockchain technology expertise to take data privacy and security to the next level. Check out our podcast, Privacy Files, to hear what your peers and experts are saying about the state of member and consumer privacy in real time. 

 
The Bid Picture podcast provides an array of information about cybersecurity. It includes the latest news and facts to keep listeners up-to-date with the most current events and developments in cybersecurity.  

The post 6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts appeared first on Anonyome Labs.


HYPR

HYPR and Microsoft Partner on Entra FIDO2 Provisioning APIs

Yesterday at the Black Hat conference, Microsoft announced the public preview of Entra FIDO2 provisioning APIs. HYPR worked closely with Microsoft on these critical enhancements, which make it easier for Entra customers to provision passkeys for their users. Like the EAM integration unveiled a few months ago, collaborative development of such features is essential to fuel adoption of secure, phishing-resistant authentication methods. We are honored that Microsoft named HYPR as a fully-tested vendor to help Entra customers on their FIDO2 provisioning journey.

“This partnership underscores our commitment to delivering a secure and interoperable ecosystem for our customers… Their involvement has been instrumental in ensuring that the APIs are robust, versatile, and ready for real-world challenges.”

– Tim Larson, Senior Product Manager on Microsoft Entra

What Are the Microsoft Entra FIDO2 Provisioning APIs?

Credential compromise is the top entry vector for attacks. Adversaries use phishing, adversary-in-the-middle (AitM), social engineering, and other tactics — increasingly aided by AI — to steal passwords and MFA tokens to log in as legitimate users. These breaches are very hard to detect until the damage is already underway. Phishing-resistant authentication based on FIDO2 standards is the single most effective way organizations can protect themselves and their users against such threats. The Microsoft Entra FIDO2 provisioning APIs encourage FIDO2 deployment and adoption by making it easier for users to enroll passkeys as an authenticator. Organizations can build their own admin provisioning clients, or work with a provider like HYPR, which leverages the new APIs.

How It Works

Using the new APIs, it’s quick and simple to provision a FIDO2 security key / passkey as a credential for Entra ID. Previously, users had to manually register their security key with Entra ID. The APIs eliminate this step, letting organizations handle the registration on behalf of their users. They work with both hardware FIDO2 keys and virtual FIDO2 security keys like HYPR’s.

What Does It Mean for HYPR Customers?

The new APIs further optimize the HYPR integration with Microsoft Entra ID. Leveraging their functionality streamlines provisioning of HYPR Enterprise Passkeys, making them the ideal authentication option for Microsoft Entra environments. Users simply pair their Windows workstation with HYPR and the passkey is automatically added to their Entra profile. As you can see in the below video, the entire process takes less than a minute.

Enrolling HYPR Enterprise Passkeys using the new Microsoft Entra ID FIDO2 provisioning APIs

HYPR Enterprise Passkeys

HYPR Enterprise Passkeys are Microsoft-approved and validated, FIDO Certified device-bound passkeys. They provide the assurance of a hardware key, including provenance attestation, and the convenience of a mobile authenticator app. With Enterprise Passkeys, users authenticate with a single gesture to gain access to Entra ID and all downstream apps. If they use HYPR to log into their desktop, the authenticated identity is automatically passed to Entra ID.

Enterprise Passkeys work in both fully Entra-joined and hybrid-joined environments, with multiple transport options for greater flexibility.

Learn More About HYPR and the Microsoft Entra FIDO2 Provisioning APIs

The Microsoft Entra FIDO2 Provisioning APIs are now in public preview. Read Microsoft’s technical documentation for more details about how it works. To learn more about how HYPR leverages the new APIs and HYPR Enterprise Passkeys for Entra ID, talk to our team!

 


KuppingerCole

Where Do Organizations Stand With a Comprehensive IAM Blueprint?


by Martin Kuppinger

Work is still to be done to see widespread comprehensive IAM in place. In a survey run by KuppingerCole Analysts, participants reported the status of their IAM blueprint.

Recent Survey Results

While 40% of participants reported that they do have a comprehensive IAM blueprint in place, a large portion of participants are currently putting it in place or do not have one. 26.8% are in progress with implementing a future-ready IAM blueprint without indication of where they are in the process, and 33.1% do not have one in place.

Figure 1: Companies that have a comprehensive IAM blueprint in place; KuppingerCole Survey, August 2024, sample size 447

The Identity Fabric models a comprehensive IAM implementation

A comprehensive, future-ready IAM blueprint should follow the Identity Fabrics paradigm. An “Identity Fabric” refers to a logical infrastructure for enterprise IAM, conceived to enable access for all, from anywhere to any service while integrating advanced features.

The demands on a future-ready IAM are complex, diverse, and sometimes even conflicting. These include:

- Different types of identities must be integrated quickly and securely in user-friendly flows.
- B2B onboarding and IAM must be facilitated in the challenging context of supply chain security.
- Employees (internal and external) should be able to use the devices they prefer.
- Secure access to working environments must be possible no matter where users and systems are located.
- Identities must be linked to reflect relationships within teams, companies, families, or partner organizations.
- Zero Trust features, such as continuously verifying access, must be included.
- Identities maintained in trusted organizations should be directly and reliably integrated and authorized in our IAM.
- Identities should be able to do business and execute payments.
- All relevant laws and regulations must be observed.
- Existing data on identities and entitlements should be applicable for analytics and artificial intelligence.
- All this must apply to all possible identities, beyond people, so that devices, services and networks are integrated into our next generation IAM infrastructure.

Figure 2: KuppingerCole Identity Fabric

The Identity Fabric shows the identities on the far left, the services on the far right, with capabilities required, services needed, and tools to leverage in the center. A more extensive description can be found in the 2024 Leadership Compass on Identity Fabric providers.

Today’s IAM systems meet, if at all, only a fraction of current requirements. And while organizations are moving towards more future-proof blueprints such as those based on the Identity Fabric, the current survey results suggest that there is still work to be done.

Why invest in a comprehensive IAM implementation?

There are various good reasons for organizations to invest in such a comprehensive blueprint and implement their own Identity Fabric. One is the overlapping capabilities between many areas of IAM: Identity Fabrics help streamline investments and avoid unnecessary redundancies. Another is moving to a modern architecture: Identity Fabrics define such a modern, future-proof architecture, including the segregation of customization and orchestration of services. A third is uniting the teams: it becomes one IAM run by one team, not many disparate, siloed efforts. One more to mention is prioritization: Identity Fabrics help in prioritizing investments and analyzing the gaps.


Ontology

Self-Sovereign Identity

Empowering Digital Identity in the Modern Era

In today’s digital-first world, self-sovereign identity (SSI) has emerged as a revolutionary concept, transforming how we manage and control our digital identities. SSI empowers individuals to own and govern their online personas without relying on centralized authorities, addressing critical issues of equity, data ownership, privacy, and trust in the digital realm.

Equity and Digital Inclusion

SSI has the potential to bridge the digital divide and promote equity by providing a universal means of identity verification. This is particularly crucial for the estimated 1 billion people worldwide who lack official identification. By enabling individuals to create and manage their own digital identities, SSI can grant access to essential services, financial inclusion, and participation in the digital economy to those previously marginalized.

Data Ownership and Personal Control

A fundamental principle of SSI is that individuals should have complete ownership and control over their personal data. In traditional systems, our information is often scattered across various centralized databases, leaving us vulnerable to data breaches and unauthorized access. SSI allows users to store their data locally or in decentralized systems, granting them the power to decide what information to share and with whom.

Addressing Centralization Risks

Centralized identity systems pose significant risks to privacy and security. Data breaches in large organizations have exposed millions of individuals’ personal information. SSI mitigates these risks by eliminating single points of failure and reducing the attractiveness of centralized databases to malicious actors. By distributing identity information across a decentralized network, SSI enhances both privacy and security in the digital ecosystem.

Building Digital Trust

SSI leverages cryptographic technologies to create verifiable credentials that can be trusted without relying on a central authority. This approach enables secure and private digital interactions between individuals and organizations, fostering a more trustworthy online environment. Users can selectively disclose only the necessary information for each interaction, maintaining their privacy while still providing verifiable proof of their claims.
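The selective-disclosure idea described above can be sketched with salted claim hashes and an issuer signature over the hash list. This is a toy illustration, not any real SSI standard: the function names are hypothetical, production systems use asymmetric signatures (e.g. Ed25519) and formats like W3C Verifiable Credentials, and HMAC with a shared issuer key stands in here only to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json
import secrets

def issue_credential(issuer_key, claims):
    # Issuer salts and hashes each claim value, then signs the digest list.
    # (HMAC stands in for a real digital signature in this sketch.)
    salted = {k: (secrets.token_hex(8), str(v)) for k, v in claims.items()}
    digests = {k: hashlib.sha256((salt + value).encode()).hexdigest()
               for k, (salt, value) in salted.items()}
    signature = hmac.new(issuer_key,
                         json.dumps(digests, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"salted_claims": salted, "digests": digests, "signature": signature}

def present(credential, disclose):
    # Holder reveals only the chosen claims; the rest stay as opaque digests.
    return {
        "revealed": {k: v for k, v in credential["salted_claims"].items()
                     if k in disclose},
        "digests": credential["digests"],
        "signature": credential["signature"],
    }

def verify(issuer_key, presentation):
    # Verifier checks the signature over all digests, then checks each
    # revealed claim against its digest, without seeing hidden claims.
    expected = hmac.new(issuer_key,
                        json.dumps(presentation["digests"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(hashlib.sha256((salt + value).encode()).hexdigest()
               == presentation["digests"][k]
               for k, (salt, value) in presentation["revealed"].items())
```

A holder could, for example, prove their age claim while keeping name and nationality hidden; tampering with a revealed value breaks the digest check and verification fails.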

AI and Proof of Humanity

As artificial intelligence becomes more sophisticated, distinguishing between human and AI-generated content or interactions becomes increasingly challenging. SSI can play a crucial role in providing proof of humanity, ensuring that digital interactions are genuinely human-to-human when necessary. This has implications for combating fraud, spam, and maintaining the integrity of online communities and marketplaces.

Overcoming Implementation Challenges

While SSI offers numerous benefits, its widespread adoption faces challenges such as technical complexity, regulatory hurdles, and the need for interoperability standards. However, as awareness grows and technologies mature, SSI has the potential to revolutionize how we interact in the digital world, putting individuals back in control of their digital selves.

In conclusion, self-sovereign identity represents a paradigm shift towards a more equitable, secure, and user-centric digital identity ecosystem. By addressing issues of data ownership, privacy, and trust, SSI empowers individuals and paves the way for a more inclusive and resilient digital future.

Self-Sovereign Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Civic

Tokenized Identity: Unmasking Robots and Sybils With Jeremy Dillingham, Passport.xyz


In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Jeremy Dillingham, Passport.xyz. They explore identity use cases and the required levels of verification, bot blocking, Sybils, and KYC requirements for protocols, tokens and contracts. Jeremy is part of Passport.xyz, which was formerly Gitcoin Passport. Passport.xyz is focused on empowering digital, […]

The post Tokenized Identity: Unmasking Robots and Sybils With Jeremy Dillingham, Passport.xyz appeared first on Civic Technologies, Inc..


Verida

Own your AI future


Secrets should be kept with those you trust, like data.

Imagine an AI like ChatGPT with 100% end-to-end privacy that works for you only.

One private vault, multiple data sources. Your personal data secured.

Your AI trained under encryption, to know you and support you.

We have built privacy preserving infrastructure for hyper personal AI experiences. To guarantee your safety. And autonomy.

Take the first step. Write your own story. Own your AI future.

Join the waitlist at Verida.ai

Own your AI future was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto regulatory affairs: Swiss regulator publishes guidance for stablecoin issuers and banks offering guarantees


Switzerland’s financial sector watchdog has released regulatory guidance for issuers of stablecoins, and for the banks providing them guarantees against default. 


uquodo

Your guide to KYC in Oman


The post Your guide to KYC in Oman appeared first on uqudo.


Ocean Protocol

DF101 Completes and DF102 Launches

Predictoor DF101 rewards available. DF102 runs Aug 8 — Aug 15, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 101 (DF101) has completed.

DF102 is live today, Aug 8. It concludes on August 15. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF102 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF102

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF101 Completes and DF102 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Identity-Centric Finance Regulations - Asia-Pacific

See which financial regulations in Asia, Japan, and the Pacific have stringent identity standards, and how identity access management helps achieve compliance.

In Asia, Japan, and the Pacific (APJ), the heterogeneity of identity-centric bank and finance regulations distinguishes this region from the rest of the world. Given the many countries and diverse makeup of the region, the regulatory framework is more favorable to innovation and identity plays a more central role in driving such progress. 


Patricia Or and Vicky Cheng, Regulatory Affairs Specialist at Bloomberg, explain the distinctive nature of financial services regulations in the region, stating:

 

Wednesday, 07. August 2024

Indicio

Associate DevOps (Remote)


Associate DevOps (Remote)

Job Description

We are the world’s leading verifiable data technology. We bring complete solutions that fit into an organization’s existing technology stack, delivering secure, trustworthy, verifiable information. Our Indicio Proven® flagship product removes complexity and reduces fraud. With Indicio Proven® you can build seamless processes to deliver best-in-class verifiable data products and services.

As a rapidly growing start up we need team members who can work in a fast paced environment, produce high quality work on time, work without supervision, show initiative, innovate, and be laser focused on results. You will create lasting impact and see the results of your work immediately. 

As an Associate DevOps you’ll be responsible for implementing network integrations used by our clients. As an Associate position, this role is part-time (maximum 30 hours a week) and does not offer benefits, but the goal of our associate program is to help you grow into a full-time role. 

You’ll have opportunities for mentorship, community outreach, publication and personal branding, and career and personal development. There are bi-weekly sprints, daily standups, peer programming sessions, and weekly game sessions. 

Indicio is a fully remote team (our Maryland colleagues have a co-working space) and our clients are located around the world. Working remotely requires you to be self-motivated, a demonstrated team-player, and have outstanding communication skills. 

We do not conduct live coding interviews, but we do like to talk about your favorite projects and may ask for code samples if you are shortlisted.

Responsibilities

- Implement integrations requested by customers
- Deploy updates and fixes
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Qualifications

- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Proficiency with the following tools is preferred, but familiarity with similar tools is just fine: Google Suite, JIRA, Git, GitHub, and Slack
- Working with urgency without direct supervision, taking initiative, and asking lots of questions
- Strong team working skills including empathy, always assuming the best intent, kindness, collaboration, and a desire to create impact
- Experience or desire in working remotely in a startup environment is a plus
- Must live in and be legally able to work in the US; we cannot sponsor work visas at this time (required)

We highly encourage candidates of all backgrounds to apply to work with us – we recruit based on more than just official qualifications, including non-technical experience, initiative, and curiosity. 

As a Public Benefit Corporation, a women-owned business, and WSOB certified, Indicio is committed to advancing decentralized identity as a public good that enables all people to control their online identities and share their data by consent. We aim to create a welcoming, diverse, inclusive, and equitable environment for all.

Apply today!

 

The post Associate DevOps (Remote) appeared first on Indicio.


1Kosmos BlockID

What Is 3FA (Three-Factor Authentication)?


How secure are you in a world where data breaches and cyber-attacks make headlines daily? You might think you’re doing enough if you’ve already upgraded to Two-Factor Authentication (2FA). However, the cyber world and its threats are evolving—enter Three-Factor Authentication (3FA). This enhanced security protocol adds an extra layer of armor, making unauthorized access even more complex. In this comprehensive guide, we dive deep into the what, why, and how of 3FA, providing insights that can help you bolster your cybersecurity posture.

The Foundations of 3FA

What is 3FA?

Three-Factor Authentication (3FA) is a security protocol that adds an extra layer of protection on top of the traditional Two-Factor Authentication (2FA). 3FA requires users to present three identifying factors before accessing an account, app, or system.

These factors typically involve something the user knows (a password), something the user has (a mobile phone or other device), and something the user is (biometric data).

The concept behind 3FA is straightforward: the more authentication factors involved, the harder it is for unauthorized users to gain access. It’s a comprehensive approach to security in which the extra verification steps make the process more robust by reducing the chances of a breach.

The Evolution from 2FA to 3FA

Two-Factor Authentication (2FA) has been the industry standard for protecting accounts and systems against stolen passwords. However, as cyber threats grow in sophistication, there is an increasing need for more rigorous security measures.

3FA evolved as a response to this need, incorporating an additional layer of security beyond the password, making it even more difficult for unauthorized users to access accounts.

This third layer could take various forms, such as a fingerprint scan, another biometric identifier, or a behavioral pattern, depending on the system in question and its security requirements. By adding this layer, 3FA significantly raises the bar for attackers trying to compromise a system.

Who Needs to Know About 3FA?

3FA is increasingly relevant to a broad audience. Organizations dealing with sensitive or classified information are generally considered the most obvious candidates for 3FA.

This includes government agencies, healthcare institutions, and financial firms. However, any organization seeking to bolster its cybersecurity posture can benefit from implementing 3FA.

Moreover, individual users with a heightened need for security, such as celebrities, executives, or public figures, can also benefit from 3FA. Even the general public is beginning to appreciate more advanced security protocols as awareness of cyber threats grows.

How Does 3FA Work?

The mechanics of 3FA are a natural extension of 2FA, the difference being the addition of a third factor for validation. As with 2FA, the user must provide two forms of identification, plus a third, distinct identity-confirming credential for verification.

The 3FA Process Explained

Typically, 3FA starts with the user entering a username and a password. Next, a secondary device, like a smartphone, receives a time-sensitive code.

After entering this code, the user must provide a third form of identification: a fingerprint or retina scan, a voice recognition test, or some other form of biometric verification. Only after successfully passing all three gates does the user gain access to the system or account.
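The flow above can be sketched as three independent checks that must all pass. The function names are hypothetical; the one-time code follows the standard TOTP construction (RFC 6238), while the biometric matcher is a byte-comparison placeholder for a real template-matching engine.

```python
import hashlib
import hmac
import struct
import time

def verify_password(stored_hash, salt, attempt):
    # Factor 1 (knowledge): compare a salted PBKDF2 hash of the attempt.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored_hash, candidate)

def totp(secret, at=None, step=30, digits=6):
    # Factor 2 (possession): RFC 6238 time-based code derived from the
    # shared secret held by the user's device.
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_biometric(enrolled_template, sample_template):
    # Factor 3 (inherence): placeholder only. A real matcher compares
    # feature templates against a similarity threshold, not exact bytes.
    return hmac.compare_digest(enrolled_template, sample_template)

def three_factor_login(stored_hash, salt, password, secret, code, enrolled, sample):
    # Access is granted only when all three independent factors pass.
    return (verify_password(stored_hash, salt, password)
            and hmac.compare_digest(totp(secret), code)
            and verify_biometric(enrolled, sample))
```

Because each check draws on a different category of evidence, stealing any single credential (the password, the phone's secret, or a biometric sample) is not enough to pass the gate.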

Types of Factors in 3FA

The factors used in 3FA fall into three categories: knowledge-based (something you know), possession-based (something you have), and inherence-based (something you are).

Knowledge-based factors include passwords and PINs, possession-based factors encompass mobile devices or smart cards, and inherence-based factors refer to biometrics like fingerprints or iris scans.

Different combinations of these categories can be employed depending on the level of security required. It’s important that the three factors come from distinct categories to maximize the security benefits.

3FA Protocols and Mechanisms

Several protocols and mechanisms support the implementation of 3FA—these range from standard protocols like OAuth and OpenID to specialized options for high-security environments.

Additionally, hardware tokens and biometric fingerprint scanners might be integrated into the system for the third factor.

Selecting the appropriate authentication protocol and mechanism depends on various factors, including the organization’s existing infrastructure, user needs, and specific security requirements.

Benefits of 3FA

 

3FA offers many advantages, making it a worthy investment for organizations seeking robust security solutions. Not only does it dramatically reduce the chances of unauthorized access, but it also aligns well with various regulatory standards.

Improved Security Posture

Undoubtedly, the most significant benefit of 3FA is its enhanced security. By requiring three distinct forms of verification before granting access, 3FA makes it exponentially more challenging for unauthorized users to get in. This is particularly beneficial for organizations handling sensitive data, where the stakes of identity theft are high.

Regulatory Compliance

Another advantage of 3FA is its alignment with various regulatory standards. For organizations that must comply with guidelines such as GDPR, HIPAA, or PCI-DSS, implementing 3FA can aid in achieving and maintaining compliance. It is a tangible demonstration of an organization’s commitment to safeguarding user data.

User Experience and Usability

While adding more steps to the login process might seem like a burden, many modern 3FA solutions are designed with user experience in mind. Biometric authentication, for instance, can be quicker and more natural than entering a complex password. As a result, the additional security layer does not necessarily come at the expense of usability.

Implementing 3FA

 

 

Technical Requirements

Implementing 3FA will inevitably require some technological adjustments. At a minimum, organizations must ensure they have the infrastructure to support this type of security measure. This could include software that supports multi-factor authentication protocols and hardware like biometric scanners or token generators.

A secure and reliable network is also essential for 3FA to function optimally. While cloud-based solutions are available, organizations must maintain network security protocols to minimize potential vulnerabilities.

Costs and Budgeting

The implementation of 3FA involves both upfront and ongoing costs. Upfront costs may include the purchase of hardware and software and the expenses related to system integration. Ongoing costs can encompass maintenance, updates, and possibly licensing fees.

Budgeting for 3FA should consider both the direct costs and the potential savings from reduced security incidents. While the initial investment can be significant, the long-term benefits often justify the expenditure.

Common Pitfalls and How to Avoid Them

While 3FA offers enhanced security, poor implementation can undermine its effectiveness. One common pitfall is inadequate staff training, leading to user errors that compromise security. Proper training and awareness programs can mitigate this risk.

Another issue is over-reliance on a single category of authentication factor, such as using multiple biometric identifiers, which defeats the purpose of multi-factor authentication. A diversified approach using factors from different categories is recommended.

Potential Challenges and Criticisms

The Complexity Issue

One of the criticisms of 3FA is the added complexity it introduces. Critics argue that while the system is more secure, it is also more cumbersome. However, many 3FA solutions focus on improving the user experience to mitigate this issue, and the benefits of heightened security often outweigh the downsides.

Reliance on Technology

Another concern is the heavy reliance on technology, such as smartphones or other biometric authentication devices, which could malfunction or be lost.

This reliance creates a potential weak link in the security chain. To counter it, backup options and alternative authentication methods should be part of any comprehensive 3FA strategy.

User Acceptance and Training

As with any new system, user acceptance is often a hurdle. People generally resist change, particularly regarding technology that requires them to alter their habits. Effective training and awareness programs can go a long way in facilitating smooth adoption.

Emerging Trends in 3FA

As the digital identity landscape evolves, so too does 3FA. One emerging trend is the integration of artificial intelligence to improve the efficiency and accuracy of the authentication process. Machine learning algorithms could, for example, analyze user behavior to provide a more dynamic and secure form of authentication.

Integrating more advanced biometrics and AI offers promising avenues for 3FA’s development. Beyond facial recognition, fingerprints, and iris scans, new forms of biometric data, such as heart rate or brainwave patterns, are being explored.

Blockchain technology has also been touted as a possible element in the future of 3FA. It offers the potential for decentralized authentication methods that are not only secure but also more user-friendly. The immutable nature of blockchain records can further enhance the security aspects of 3FA transactions.

To wrap it all up, 3FA offers a heightened level of security that is becoming increasingly essential in our digitalized world. The potential applications and benefits are vast, from government agencies to everyday internet users. While implementing 3FA involves a range of logistical and technological considerations, the upside in terms of cybersecurity makes it a worthy investment. If you’re committed to taking your organization’s digital security to the next level, don’t hesitate to contact our team today.

The post What Is 3FA (Three-Factor Authentication)? appeared first on 1Kosmos.


Indicio

Decentralized digital ID providers pitch privacy, monetization features

Biometric Update The post Decentralized digital ID providers pitch privacy, monetization features appeared first on Indicio.

Microsoft Entra (Azure AD) Blog

Public preview: Microsoft Entra ID FIDO2 provisioning APIs


Today I'm excited to announce a great new way to onboard employees with admin provisioning of FIDO2 security keys (passkeys) on behalf of users.

 

Our customers love passkeys as a phishing-resistant method for their users, but some were concerned that registration was limited to users registering their own security keys. Today we’re announcing the new Microsoft Entra ID FIDO2 provisioning APIs that empowers organizations to handle this provisioning for their users, providing secure and seamless authentication from day one.

 

While customers can still deploy security keys in their default configuration, or allow users to bring their own security keys (which requires self-service registration by the user), the APIs allow keys to be pre-provisioned for users, so users have an easier experience on first use.

 

Adopting phishing-resistant authentication is critical - attackers have increased their use of Adversary-in-the-Middle (AitM) phishing and social engineering attacks to target MFA-enabled users. Phishing-resistant authentication methods, including passkeys, certificate-based authentication (CBA), and Windows Hello for Business, are the best ways to protect from these attacks.

 

Phishing-resistant authentication is also a key requirement of Executive Order 14028 which requires phishing-resistant authentication for all agency staff, contractors, and partners.  While most federal customers use preexisting smartcard systems to achieve compliance, passkeys provide a secure alternative for their users looking for improved ways to securely sign in. With today’s release of admin provisioning, they also have a simplified onboarding process for users.

 

With the Microsoft Entra ID FIDO2 provisioning APIs, organizations can build their own admin provisioning clients, or partner with one of the many leading credential management system (CMS) providers who have integrated our APIs into their offerings.

 

Tim Larson, Senior Product Manager on Microsoft Entra, will now walk you through this new capability that will help in your transition towards phishing-resistant multifactor authentication (MFA).    

 

Thanks, and please let us know your thoughts!

 

Alex Weinert

 

--

 

Hello everyone,

 

Tim here from the Microsoft Entra product management team. I’m excited to share with you our new passkey (FIDO2) provisioning capabilities in Entra ID!

 

Back in May we shared how we’re expanding passkey support in Microsoft Entra ID with the addition of device-bound passkey support in Microsoft Authenticator. As part of our commitment to provide more passkey capabilities we’ve enhanced our passkey (FIDO2) credential APIs to make onboarding security keys for users more convenient.

 

How does it work?

 

With the enhancements made to our passkey (FIDO2) credential APIs, you can now request WebAuthn creation options from Entra ID and use the returned data to create and register a passkey credential on behalf of a user.

 

Three main steps are required to register a security key on behalf of a user:

 

 

 

1. Request creationOptions for a user: Entra ID will return the data your client needs to provision a passkey (FIDO2) credential, including user information, relying party, credential policy requirements, algorithms, and more.

2. Provision the passkey (FIDO2) credential with the creationOptions: Using the creationOptions, provision the credential with a client or script that supports the Client to Authenticator Protocol (CTAP). During this step you’ll need to insert a security key and set a PIN.

3. Register the provisioned credential with Entra ID: Using the output from the provisioning process, provide Entra ID with the data needed to register the passkey (FIDO2) credential for the targeted user.
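As a rough sketch, the three steps map to a pair of Microsoft Graph calls with a CTAP interaction in between. The endpoint paths, payload fields, and return shapes below are illustrative assumptions based on the flow described above, not the exact Graph contract — consult the Graph API documentation for the real shapes:

```python
# Sketch of an admin provisioning client's three steps.
# Endpoint paths and payload fields are assumptions for illustration only;
# see the Microsoft Graph API documentation for the exact request shapes.

GRAPH = "https://graph.microsoft.com/beta"  # assumed base URL for preview APIs

def request_creation_options(user_id: str) -> tuple[str, str]:
    """Step 1: ask Entra ID for WebAuthn creationOptions for a user."""
    return ("GET", f"{GRAPH}/users/{user_id}/authentication/fido2Methods/creationOptions")

def provision_with_ctap(creation_options: dict) -> dict:
    """Step 2: hand the creationOptions to a CTAP-capable client or library,
    which talks to the inserted security key and sets a PIN.
    The attestation values here are placeholders for what the key returns."""
    return {
        "attestationObject": "<from security key>",
        "clientDataJSON": "<from CTAP client>",
        "challenge": creation_options["challenge"],
    }

def register_credential(user_id: str, attestation: dict, display_name: str) -> tuple[str, str, dict]:
    """Step 3: register the provisioned credential back with Entra ID
    on behalf of the targeted user."""
    body = {"displayName": display_name, "publicKeyCredential": attestation}
    return ("POST", f"{GRAPH}/users/{user_id}/authentication/fido2Methods", body)
```

A provisioning client would execute step 1, pass the result to a CTAP library with the key inserted, then send the attestation output in step 3 — after which the user can sign in with the key on day one.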

 

Build your own app or use a CMS vendor offering

 

In addition to providing the tools above, Microsoft has collaborated with 10 leading vendors in the CMS space to integrate the new FIDO2 provisioning APIs. These vendors have rigorously tested the new APIs and are available to help you with provisioning if creating your own integration isn’t something you want to do.

 

This partnership underscores our commitment to delivering a secure and interoperable ecosystem for our customers. These vendors represent a diverse range of CMS solutions, each bringing unique insights and expertise to the table. Their involvement has been instrumental in ensuring that the APIs are robust, versatile, and ready for real-world challenges.

 

As we roll out the public preview, we are proud to announce that these vendors have pledged their support, integrating the APIs into their platforms. This collaboration not only enhances the security landscape but also paves the way for seamless adoption across various industries.

 

 

 

What’s next?

 

This public preview is the next step in our passkey journey, and we’re gearing up for even more passkey (FIDO2) provisioning features. We’re looking forward to building provisioning capabilities into the Entra admin center, which will give help desk staff and other admins the ability to provision FIDO2 security keys for users directly.

 

To learn more about everything discussed here, check out how to enable passkeys (FIDO2) for your organization and review our Microsoft Graph API documentation. Reach out to your preferred CMS provider to learn more about their integrations with the Microsoft Entra ID FIDO2 Provisioning APIs.

 

Thanks,

Tim Larson

 

 

Read more on this topic 

Public preview: Expanding passkey support in Microsoft Entra ID - Microsoft Community Hub

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


Ontology

Securing Love in the Digital Age


How Decentralized Identity Can Revolutionize Dating Apps

The recent analysis of 15 popular location-based dating (LBD) apps revealed alarming privacy and security vulnerabilities. These issues expose users to risks ranging from stalking and harassment to identity theft. Decentralized identity solutions, particularly ONT ID from Ontology Network, offer a promising approach to mitigate these concerns.

Easy Account Creation and Verification

Problem: The study found that 7 out of 15 apps only require an email address to create an account, making it easy for adversaries to create fake profiles.

Solution: ONT ID can provide verifiable credentials for account creation without storing sensitive data on the app’s servers. This allows for robust user authentication while maintaining privacy.

Excessive Personal Data Exposure

Problem: Many apps expose large amounts of personal data in the user interface, including sensitive information like ethnicity and sexual orientation.

Solution: With ONT ID, users can selectively share only the necessary attributes for matchmaking. The decentralized nature ensures that users retain control over their personal information, reducing the risk of data breaches and unauthorized access.

Inadvertent Data Leaks

Problem: The study uncovered significant API traffic leaks, exposing data that users believed to be hidden.

Solution: By leveraging blockchain technology, ONT ID can ensure that only explicitly shared data is accessible. This aligns the user’s expectations with actual data exposure, eliminating inadvertent leaks through API traffic.

Location Privacy Vulnerabilities

Problem: 6 apps were found to be susceptible to exact location tracking through trilateration attacks.

Solution: ONT ID can implement privacy-preserving location verification. Users could prove their proximity to potential matches without revealing exact coordinates, protecting against stalking and location-based attacks.
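To see why distance readouts are dangerous, consider the trilateration attack itself: three spoofed probe locations and the exact distances an app reports are enough to solve for a user's coordinates with basic algebra. A minimal 2D illustration (made-up coordinates, not from any of the apps studied):

```python
# Trilateration: given three known anchor points and exact distances to a
# target, the target's coordinates fall out of simple linear algebra.
# This is the attack that privacy-preserving proximity proofs aim to prevent.

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Recover (x, y) from three anchors and the distance to each."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise leaves two linear
    # equations in x and y, which we solve with Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - b1 * a2
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det)
```

An attacker who can query "distance to this user" from three positions of their choosing runs exactly this computation — which is why coarsened distances or cryptographic proximity proofs, rather than raw readouts, are the right defense.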

Lack of User Control

Problem: Many apps provide limited options for users to control what data they share.

Solution: Decentralized identity enables granular consent management. Users can decide exactly what information is shared, with whom, and for how long, enhancing privacy and user agency.

By adopting decentralized identity solutions like ONT ID, dating apps can significantly enhance user privacy and security. This approach not only addresses the specific vulnerabilities identified in the study but also aligns with data protection principles such as data minimization and user control.

As the digital dating landscape evolves, integrating decentralized identity could be the key to fostering safer, more authentic connections online. It’s time for dating apps to prioritize user-centric privacy measures, ensuring that the quest for love doesn’t come at the cost of personal security.

Securing Love in the Digital Age was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Policy Based Access Management


by Martin Kuppinger

Efficient, effective management of access controls from infrastructure to applications remains an aspiration for enterprises. The main drivers of this goal include the need for strengthening the cybersecurity posture, efficiency gains in managing access controls, the need for consistency in access controls across multiple solutions and layers, and regulatory compliance. Most organizations today struggle with a mixture of point solutions for managing access controls, many of these relying on static entitlements causing massive work and tending to become inaccurate. A consistent, policy-based solution for managing access controls ensures that the right people have the right access, at the right time, from the right place. This Leadership Compass features vendors offering policy-based access control solutions and provides guidance on aligning a vendor’s solution to common corporate access control requirements.
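The "right people, right access, right time, right place" idea can be made concrete with a toy attribute-based policy check — a generic illustration of runtime policy evaluation, not any vendor's engine:

```python
# Minimal policy-based access check: a policy is a set of required attribute
# conditions evaluated against the request context at decision time, rather
# than a static entitlement assigned up front.

def evaluate(policy: dict, context: dict) -> bool:
    """Allow only if every policy condition holds for the request context."""
    return all(context.get(attr) in allowed for attr, allowed in policy.items())

# Hypothetical policy protecting a finance-reports application:
finance_reports = {
    "role": {"controller", "auditor"},   # right people
    "action": {"read"},                  # right access
    "hour": set(range(8, 18)),           # right time (business hours)
    "network": {"corporate", "vpn"},     # right place
}
```

For example, `evaluate(finance_reports, {"role": "auditor", "action": "read", "hour": 10, "network": "vpn"})` allows the request, while the same request at hour 22 is denied — no entitlement data to re-certify, just a policy to maintain.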

Ontology

Ontology Weekly Report (July 29th — August 5th, 2024)

Ontology Weekly Report (July 29th — August 5th, 2024)

Welcome to this week’s Ontology report, where we highlight our continuous efforts to enhance our platform and expand our ecosystem through innovative developments, community engagement, and strategic partnerships. Here’s what’s been happening:

Latest Developments

- Community Call with Plena Finance: We hosted a community call with Plena Finance, discussing potential synergies and collaborative opportunities to enhance financial solutions on the blockchain.
- Web3 Happenings — Privacy Discussion: Our latest Web3 Happenings focused on DAO Governance in Web3, a critical issue as we advance in the digital age. Be sure to join our next session to dive deeper into this vital topic.
- New Partnership with Moongate: We’re excited to announce a new partnership with Moongate, aiming to expand our technological reach and foster innovative blockchain solutions.
- Joining Ispolink: Ontology has joined Ispolink, enhancing our network connections and collaborative potential in the blockchain industry.

Development Progress

- Go Toolkit Upgrade: We’ve successfully upgraded the Go toolkit for Ontology, enhancing the development experience and providing more robust tools for our developers.
- ONT Leverage Staking Design: Progress continues with our leverage staking design, currently at 65%, which aims to provide more flexible and profitable staking options.
- RPC Port Service Isolation: We have addressed and fixed issues related to the Ontology RPC port service isolation, improving security and functionality.

Product Development

- $NEIRO Listing on ONTO: We’re pleased to announce that $NEIRO is now listed on ONTO, expanding the range of assets available to our users and enhancing trading possibilities.

On-Chain Activity

- dApp Ecosystem Stability: The total number of dApps on our MainNet remains robust at 177.
- Transaction Growth: We observed an increase of 2,977 dApp-related transactions this week, totaling 7,779,915. Overall transactions on MainNet also saw a significant rise of 14,544, reaching 19,519,766.

Community Growth

- Engaging Community Discussions: Our social media platforms, including Twitter and Telegram, continue to be lively spaces for community interactions and the latest updates.
This week’s featured Telegram discussion led by Ontology Loyal Members focused on “Exploring Interoperable DID Solutions: Web2, Web3, and Beyond,” covering topics from customer knowledge to login systems and peer interactions.

Stay Connected 📱

Engage with us and stay updated on the latest developments by following our social media channels. Your participation and feedback are invaluable as we continue to advance the blockchain and decentralized identity landscapes.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Thank you for your ongoing support and engagement. As we push forward, we are committed to delivering innovative solutions and fostering a more inclusive and secure digital future. Stay tuned for more updates and developments next week!

Ontology Weekly Report (July 29th — August 5th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

The Rise of Fraudulent Carriers: A Growing Threat to Freight Brokers

Truckstop is a trusted platform for brokers, shippers, and carriers. With Ping, Truckstop protects users from fraud while enhancing efficiencies and reliabilities.

Strategic theft continues to threaten the supply chain, with reported loss values exceeding $34M in Q2 of 2024 alone. Specifically, the rise of fraudulent carriers in the freight market is posing significant challenges for freight brokers, costing them revenue and damaging reputations.

 

Fraudulent carriers often use fake credentials and stolen identities to secure loads, only to disappear with the goods, leaving brokers to deal with the fallout. As for carriers, they risk stolen identities which can result in personal and professional financial losses and reputational damage. This trend also erodes trust within the industry, making it harder for brokers to confidently engage with new carriers. For carriers, identity theft can limit their ability to secure the loads they need to keep their business moving. 


Identity-Centric Bank & Finance Regulations - UK

See which of UK’s financial regulations have stringent identity standards, and how identity access management helps achieve compliance.

Verida

Revamped Verida Network Explorer: Discover and Manage Your Digital Identity

Experience improved features for seamless navigation and discovery

We are excited to unveil the newly revamped Verida Network Explorer, your comprehensive gateway to exploring identity and data on the Verida Network. As a layer zero DePIN, Verida secures your private data and provides confidential compute for secure personal AI assistants. Our goal is to empower users and developers by providing an enhanced tool to gain a thorough understanding of decentralized identities (DID) and activities on the Verida Network.

Discovering the Verida Network Explorer

The Verida Network Explorer offers a variety of features that allow you to gain valuable insights into your digital identity. Here’s a closer look at what you can do with this tool:

1. Search for Your Identity

The foundation of your digital identity is your unique DID (Decentralized Identifier) address. The Network Explorer allows you to easily search your identity by using your identifier (DID) within the Verida Network. Manage your Identity with your private key and take control of your digital world.

Developers: Learn more about Accounts and Identity
Users: Create your DID with Verida Wallet

2. Examine Your Public DID Document and Metadata

Once you’ve located your Identity, the Network Explorer provides you with a view of your DID document, hosted on the decentralized Verida Network. Following the W3C standards, your DID document contains information describing the DID and its associated metadata, including associated application contexts.
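For readers unfamiliar with the format, a W3C-style DID document is a small JSON structure linking an identifier to its keys and service endpoints. The sketch below shows the general shape with made-up placeholder values (the identifier, key, and endpoint are not a real Verida DID):

```python
# Shape of a W3C DID Core document (placeholder values for illustration;
# not a real Verida identity or key).

did = "did:example:alice-123"  # hypothetical DID, not a real identifier

did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "controller": did,
        "publicKeyHex": "<public key bytes>",
    }],
    # Service entries can describe associated application contexts
    # and storage endpoints on the network.
    "service": [{
        "id": f"{did}#storage",
        "type": "VeridaDatabase",
        "serviceEndpoint": "https://storage-node.example.com/",
    }],
}

def is_well_formed(doc: dict) -> bool:
    """Basic sanity checks: DID context present, a DID-shaped id, and
    every verification method controlled by the document's own DID."""
    return ("https://www.w3.org/ns/did/v1" in doc.get("@context", [])
            and doc.get("id", "").startswith("did:")
            and all(vm["controller"] == doc["id"]
                    for vm in doc.get("verificationMethod", [])))
```

The Explorer surfaces exactly this kind of public document, so anyone can resolve a DID and verify which keys and endpoints it declares.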

3. Storage Node Distribution

The Node Distribution section gives a geographical representation of the distribution of nodes across the globe. It helps in identifying where the nodes are located and ensures transparency in the storage and management of your data.

4. Storage Node Details

The List of Nodes section provides information about each node in the network, where you can see the node name, region, available slots, and status. You can also click on a node to open a dedicated page containing the node details. This information is crucial for developers and users to understand the network’s structure and performance.

Developers: Learn more about data storage on Verida Network

5. Overview of Storage Nodes on Verida Network (Coming Soon)

Coming soon is the Overview section, which provides a snapshot of the network’s storage capacity and utilization. You can see how much data is being used and how much capacity is available in the network. This helps in understanding the overall health and efficiency of the network.

Secure Your Data with Verida

Verida provides fast, low-cost infrastructure for private data and personal AI applications. As the first self-sovereign data network, Verida enables developers to build applications where users can manage their identity, crypto, data, and reputation.

You have the power to own, control, and delete every part of your digital footprint.

Unlocking the Power of Transparency

The Verida Network Explorer is your window to discover and manage your digital identity effectively. Whether you’re a user seeking insights into your data or a developer integrating with the Verida Network, this tool is your go-to resource.

To help you make the most of the Verida Network Explorer, we’ve prepared a comprehensive User Guide with step-by-step instructions and tips. For developers, there are technical docs for learning more about accounts and identity, application contexts, and data storage on the Verida Network.

Thank you for being a part of the Verida community as we shape the future of digital identity together. Stay tuned for more exciting updates!

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. Utilizing cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for various industries. We are also at the forefront of developing privacy-preserving personalized AI solutions. For more information, visit Verida.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Revamped Verida Network Explorer: Discover and Manage Your Digital Identity was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 06. August 2024

Indicio

Indicio and DNP partner to offer Indicio Academy certified training in decentralized identity technology to the Japanese market

The post Indicio and DNP partner to offer Indicio Academy certified training in decentralized identity technology to the Japanese market appeared first on Indicio.

Indicio Academy offers training and certification in all aspects of the business and technology of decentralized identity. The partnership signals growing interest in verifiable identity and data solutions in Japan.

TOKYO, Aug. 6, 2024 /PRNewswire/ — Indicio, the global market leader in decentralized identity technology, today announced an agreement to give exclusive Japanese license rights to market and distribute Indicio’s training workshops and professional certification program, Indicio Academy, to Dai Nippon Printing Co., Ltd. (DNP). The Indicio Academy is a comprehensive training and certification program for decentralized identity technology that provides both business executives and technical developers with the skills required to create and implement seamless privacy-preserving technology for the instant verification of data and identity.

Leveraging Indicio’s expertise and leading-edge innovation in implementing decentralized identity solutions for enterprises and governments around the world, the Indicio Academy offers courses on solution architecture, system governance design and implementation, and business development and marketing. The Indicio Academy prepares teams to architect, build, market and most importantly, deploy new products and services around immediately actionable and trusted data.

“We have seen intense interest in decentralized identity technology in Japan, and we’ve already had great success in providing Indicio Academy courses,” said Heather Dahl, co-founder and CEO of Indicio. “So, this partnership is tremendously exciting in terms of the appetite for innovation and digital transformation we’re seeing in the Japanese market. We created the Indicio Academy because new and powerful technologies need to be understood from all angles — through every aspect of design and implementation and marketing — if they are to be successful. We’re using the knowledge behind our success and the successes of our customers to ensure that others who adopt this technology can get everything they and their customers need from it.”

DNP will provide and support Indicio Academy training and certification in Japan and its territories and will be responsible for translation, presentation and course facilitation for professionals who are interested in decentralized identity.

“DNP was immediately impressed with Indicio Academy from the moment our in-house team was trained and certified. We used the Indicio Academy as an opportunity to let people know what they can do in this burgeoning technology field and formed an active partnership to offer the program with the goal of gaining business partners,” said Takahito Kanazawa, Managing Director, Head of Advanced Business Center, DNP.

The Indicio Academy also provides the opportunity for course participants to earn professional certificates related to furthering their careers. Academy certifications include:

Verifiable Data Fundamentals: Foundational coursework introducing the concepts and components of decentralized identity, designed for professionals to learn how the technology works and its practical applications.

Business Professional: Designed for executives, directors, business development, marketing, communications, public relations, and sales focusing on the business of decentralized identity, value creation, and how to monetize verifiable data.

Technology Professional: Advanced series designed for developers, engineers, and architects covering agent architecture, key technology components, and how to build and run a successful solution.

Indicio will also offer further support to companies building verifiable credential solutions in any industry vertical, with their flagship product Indicio Proven®. This complete system offers a simple and fast way to add lightning-fast digital identity and data verification to any business process or application.

Both companies will focus on driving digital transformation by using this technology anywhere trust is vital, information needs to be exchanged, access needs to be given, and identity and data authenticity and integrity are essential.

Please visit Indicio to learn more.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Indicio and DNP partner to offer Indicio Academy certified training in decentralized identity technology to the Japanese market appeared first on Indicio.


Ontology

Decentralized Identity

The Key to Securing Blockchain Gaming

In the wake of a high-profile hack targeting Atari’s blockchain-based Asteroids game, the gaming industry faces a crucial crossroads. The incident, which saw developer Kautuk Kundan manipulate the game’s scoreboard on the Base network, has exposed vulnerabilities in the current blockchain gaming ecosystem. This breach not only compromises the integrity of competitive gameplay but also raises serious questions about the security and true decentralization of “on-chain” games.

The timing couldn’t be more critical, with the blockchain gaming sector experiencing a 28% decline in June 2024. However, this challenge presents an opportunity for innovation, particularly in the realm of decentralized identity solutions like ONT ID.

ONT ID, based on W3C standards for decentralized identifiers and verifiable credentials, offers a robust framework for enhancing security and user control in blockchain gaming. By implementing such solutions, platforms could:

- Fortify defenses against unauthorized access and data manipulation
- Empower users with true ownership of their digital identities and assets
- Enhance privacy through selective information sharing
- Enable seamless cross-platform and cross-chain interactions
- Build trust through transparent, verifiable gaming ecosystems

As the blockchain gaming industry evolves, integrating decentralized identity solutions like ONT ID could be the key to unlocking a more secure, user-centric, and genuinely decentralized gaming future. This approach not only addresses current vulnerabilities but also preserves the innovative potential of blockchain technology in gaming.

By embracing decentralized identity, the gaming industry can transform recent setbacks into opportunities for growth, fostering an environment where players can enjoy enhanced security, privacy, and control over their digital experiences.

Decentralized Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Identity Governance and Administration


by Nitish Deshpande

This Leadership Compass on Identity Governance and Administration (IGA) provides an overview of the IGA market and a compass to help you find a solution that best meets your needs. It examines solutions that provide both identity lifecycle management and access governance capabilities. Solutions have been assessed against defined core capabilities that support organizations in activities such as provisioning, management of entitlements, configuration and enforcement of policies, access certifications, access reviews, and user self-service, among others. It provides an assessment of how well these solutions meet organizations' needs to monitor, assess, and manage these risks.

PingTalk

Identity-Centric Bank & Finance Regulations - North America

See which North American financial regulations have stringent identity standards, and how identity access management helps achieve compliance.

The regulatory landscape of identity-centric laws differs between the US, Canada, and individual states and provinces in either country. However, both nations share an overarching goal to protect consumer identities and keep sensitive data secure while upholding safe and fair financial markets.

 

Today, identity-centric regulations are highly important to the financial industry in North America, ensuring compliance and the security of private data. As a result, financial service providers can establish and maintain trust in the industry, preventing the risk of fraud, stolen identities, and other cybersecurity incidents that negatively impact both consumers and institutions.

 

In this blog, we’ll review the major identity and access management (IAM)-related regulations in the United States and Canada that dictate how financial service providers must handle customer identities.


BlueSky

Bluesky Welcomes Mike Masnick to Board of Directors

We’re thrilled to announce that Mike Masnick has joined Bluesky’s Board of Directors.

We’re thrilled to announce that Mike Masnick has joined Bluesky’s Board of Directors. He is the author of the Protocols, not Platforms paper that first inspired the Bluesky initiative, and he is the founder and editor of Techdirt, among other accomplishments.

Mike has been an early supporter of Bluesky’s mission to create a global, open social network, as full of possibility as the early web. We’ve already gone to Mike for inspiration and advice in the past, and formalizing that relationship is the natural next step. His deep understanding of our approach — iterating towards widespread adoption while enabling trust & safety in a decentralized system — makes him an invaluable addition to our board.

As Bluesky’s network of more than 6 million users continues to grow, we’re excited to tap into Mike’s expertise as a reporter, editor and publisher. His familiarity with how policy, technology, and legal issues affect a company’s ability to innovate and grow is directly relevant to Bluesky, an open social network challenging incumbents who have kept innovation locked behind closed doors for the last decade.

“Mike's work has been an inspiration to us from the start,” says Jay Graber, CEO of Bluesky. “Having him join our board feels like a natural progression of our shared vision for a more open internet. His perspective will help ensure we're building something that truly serves users as we continue to evolve Bluesky and the AT Protocol.”

Mike shares his enthusiasm below:

“I’m excited to join the Bluesky board and to support its vision of building an open social network. Over the last few years, I’ve been thrilled to see how the Bluesky team has turned these ideas into reality, and I look forward to helping the company continue to build a better internet.”

Mike’s balanced perspective and strong advocacy for open networks will play a pivotal role in shaping the future of Bluesky and the AT Protocol. You can follow Mike Masnick on Bluesky here.

Monday, 05. August 2024

Spruce Systems

Who Should Build a Digital Wallet?

A guide for digital credential issuers deciding between an off-the-shelf digital wallet and custom wallet software.

Digital wallets manage digital credentials, assets, or authorizations. The most familiar digital wallet is probably Apple Wallet, which hundreds of millions of people use to store and use virtual credit cards and event tickets. For the growing number of states leading the shift towards digitizing identification documents, the most important role of digital wallets is storing and controlling users’ state-issued driver’s licenses and, soon, other state-issued identification, certifications, or licenses.

Digital wallets are primarily made for smartphones, where they interface with secure hardware and cryptographic software to ensure that credentials they store are secure and trusted. So it’s natural that the most widely-used wallet software is created by hardware and operating system creators (known as “original equipment manufacturers,” or OEMs) like Google and Apple, the driving forces behind most of the world’s smartphones. They know the hardware, and they have very smart teams.

However, default OEM digital wallets do have disadvantages. If you’re an enterprise or government hoping to give your users (or residents) the full benefit of the transition to digital identity, there are good reasons to build your own digital wallet software rather than relying on OEM wallets to have all the features you need. 

In brief, we believe there are two main reasons for an entity to build its own digital wallet. First, if your brand is highly trusted by end users, as might be the case for a state issuing digital driver’s licenses, building your own can dramatically impact adoption rates. The second major consideration is whether your in-house option would represent a big improvement in usability over the manufacturers’ default option, for instance, in applications requiring highly tailored features. 

The Off-The-Shelf Option

There are many benefits to using an existing OEM wallet. Most clearly, it requires fewer resources from your team, both for development and support. And an enormous user base already has the Apple or Google Wallet installed.

Even more importantly, users of the OEM wallets will already be familiar with how to use these wallets and the nuances of the user experience. By the time they use their wallet to present the credentials your organization creates, they will have already used these wallets in their day-to-day lives for shopping or tickets. The “tap to pay” user experience that’s now widespread with phone-based payment apps is a very accessible “on-ramp” for using digital identity credentials, but especially when dealing with vital interactions involving official documents used by nearly the entire population, accessibility is paramount.

Big-name wallets also have privileged access to some of a mobile device’s hardware capabilities.  These can unlock additional, and sometimes important, functionality. That includes advanced security features, such as Near Field Communication (NFC), which can make verification more streamlined. NFC functionality allows for a verification interface to quickly pop up as a user holds their smartphone close to a verifier’s reader device, rather than requiring the user to open a separate application to initiate a verification interaction. This can make certain user interactions faster and more seamless for end users. Currently, this functionality is only supported for OEM wallet implementations or those who receive special permissions from the OEM providers to implement them in applications.

There are also convenience or security features that might only be possible for software created by device manufacturers themselves, such as letting users present a credential when a phone is locked or making certain credentials usable even when a device battery is nearly empty. For credentials that might be vital in unexpected or unusual circumstances, such as medical certifications, these features could trump other considerations.

The Advantages of an In-House Digital Wallet Design

In some cases, the advantages of manufacturer software may be outweighed by the greater flexibility, hands-on support, and tailoring to specific use cases made possible by wallet software designed specifically for your users.

Above all, creating your own wallet software is the best way to ensure you can give users exactly the features and experience they want, quickly, in an appealing, easy to use, and trusted package. Longer term, controlling your own software also increases your ability to build a relationship with your end users and get the most out of advances in digital identity, instead of being beholden to the product roadmaps of large technology companies.

The biggest tech companies, remember, serve an immense user base, and outside requests for changes or updates are handled by a comparatively small product management team. If you find your wallet needs a specific feature not already offered by an off-the-shelf wallet, you and your users could be waiting for requested updates behind hundreds of other priorities.

The ability of large tech companies to serve such a huge and diverse customer base relies on “App Stores” that offer independently-developed apps. The built-in assumption is that when a smaller group of users have highly specific needs, someone will build a tailored solution for them.

That matters because current OEM wallets are designed for a generic baseline user, and only have a fraction of the functionality that digital credential systems will make possible. Most notably, wallets from big tech companies currently only support a limited subset of the credentials that can be issued digitally.

There are also significant nuances to how a digital wallet communicates with credential providers, secures user information, and handles various identity formats, which have downstream impacts on security and user experience. In the case of government identity systems, that can impact the ability to link additional digital services to an identity credential, or to control processes like renewing digital credentials.

Wallets may also need different security standards depending on their application – a pass to a secure corporate facility is generally more sensitive than a concert ticket. So a highly secure corporate entity is likely to want to build its own wallet, with more rigorous onboarding. A consumer-focused app for concert tickets, by contrast, might want a less rigorous process that prioritizes ease of use. Some digital wallets may even want to incorporate “decentralized” identity signals like social media accounts for lower-security or community-based verifications.

Data policy is another reason to consider a home-grown wallet solution. The State of California collaborated with SpruceID to create its own digital wallet software, which, among other benefits, allowed them to create a user privacy policy different from the big tech companies’ standard agreement. In some cases this might be necessary to fully comply with local privacy regulations. It may also have benefits for user adoption: some users may be skeptical of the privacy practices of large conglomerates and more likely to trust a wallet created by an official body.

Many of these nuances must be implemented in the wallet software itself, whether specific user-facing interactions or back-end architecture. However, it’s difficult to push for any specific changes or features from a big tech company. Even if you’re representing the government of a sizable state, it's like trying to steer a massive aircraft carrier; it takes significant effort and time to change direction even if there is mutual interest.

Flexibility for a Dynamic Ecosystem

The direct advantages of building your own digital wallet are significant and can be expected to lead to better features, higher trust, more adoption, and more satisfied users than relying on OEM software. However, owning the development process grants another, potentially even more important advantage: helping ensure that your team and users can support the latest advancements in digital credential technologies across different industries, and not just the ones that make it to mass-market deployment via OEM wallets.

For example, state DMVs largely favor the “mobile driver’s license” family of digital credential standards (ISO/IEC 18013-5 mDL), and OEM wallets also privilege the mDL standard. But many educational institutions, for example, prefer the OpenBadges standard by the 1EdTech educational consortium, an alternative format built on the W3C’s Verifiable Credentials. Numerous other use cases are built using W3C Verifiable Credentials, such as Microsoft’s Entra Verified ID product, C2PA for content authenticity (a specification supported by Adobe, OpenAI, and Google), and GS1’s digital supply chain integrity efforts. Further, the EU Digital Identity efforts include SD-JWTs.
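To make the format differences concrete, here is a minimal sketch of a W3C Verifiable Credential payload of the kind OpenBadges builds on. The issuer and subject DIDs and the achievement value are hypothetical examples, and the proof/signature section is omitted; consult the VC Data Model specification for the full structure.

```python
import json

# Minimal W3C Verifiable Credential payload (core fields from the VC Data
# Model). The DIDs and achievement below are made-up illustrations.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2024-08-05T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:student",
        "achievement": "Introduction to Cryptography",
    },
}

print(json.dumps(credential, indent=2))
```

An mDL, by contrast, is a binary CBOR structure defined by ISO/IEC 18013-5 rather than a JSON document like this, which is part of why a wallet must explicitly support each format.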

The downsides are the other side of the same coin: owning the development of a digital wallet is neither the most economical nor the most convenient option for an organization. Today, it takes managers who understand emerging digital credential technologies, and vendors with specialized skill sets, to do it well. This is one reason why we build open source software that allows any organization to build a wallet on a strong base of Lego-like building blocks.

It’s still early in the development of these tools, and a plethora of solutions are emerging simultaneously. The ability to tailor your in-house wallet to a non-mDL standard is just one example of the flexibility that doing it yourself allows.

Different industries will prefer different ways to handle their exchanges of authentic data, and we believe that the market has progressed to the point where bottom-up development is more likely than “one format to rule them all.” Therefore, those looking to enable functionalities across different industries may need to consider building their own wallets to ensure support for their own use cases, especially if they are cross-vertical. For instance, a shipping authorization that refers to both a cargo truck (supply chain) and its driver (personal identity) could require tailored features. Providing strong support for end-to-end use cases may require integrating many different technologies, something custom software excels at but is less common for stock OEM capabilities.

Setting the Right Priorities

We believe that, ultimately, the decision to use an OEM wallet or to build (or enhance an existing app into) a new one should be based on a few specific factors.

First, program leaders should decide which option provides the most value to end users. So for instance, if a home-grown wallet has the potential to make the end-to-end experience ten times better than an OEM wallet's, that would be a major reason to build your own.

Second, ask which approach best meets expectations for usability, security, and privacy. That can include nuanced technical considerations but also the more basic question of branding. If your user base is more likely to trust a wallet carrying your brand, such as a state government, that might suggest an in-house wallet will drive better adoption.

The third big-picture consideration is whether your solution is sustainable in the long term. For instance, building your own wallet might be a mistake if you can’t guarantee an ongoing budget not only for development, but support and updates for, quite likely, many years to come. When vendors work on the same set of technology standards, there is less lock-in, more competitive pricing, and better parallelization without sacrificing overall interoperability.

At SpruceID, we help governments and enterprises navigate these complex considerations, and our products simplify this whole process. If you’d like to discuss a specific use case for digital wallets, and would like us to weigh in on using OEM or building your own, please schedule a chat.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Microsoft Entra (Azure AD) Blog

Microsoft Entra ID Governance licensing clarifications


In the past few weeks, we’ve announced the general availability of Microsoft Entra External ID and Microsoft Entra ID multi-tenant collaboration. We’ve received requests for more detail from some of you regarding licensing, so I’d like to provide additional clarity for both of these scenarios.

 

One person, one license

 

Included in the first announcement of more multi-tenant organization (MTO) features to enhance collaboration between users, we stated that only one Microsoft Entra ID P1 license is required per employee per multi-tenant organization. Expanding on that, the term “multi-tenant organization” has two descriptions: an organization that owns and operates more than one tenant; and a set of features that enhance the collaboration experience for users between these tenants. However, your organization doesn’t have to deploy those capabilities to take advantage of the one person, one license philosophy. An organization that owns and operates multiple tenants only needs one Entra ID license per employee across those tenants. The same philosophy applies to Entra ID Governance: the organization only needs one license per person to govern the identities of these users across these tenants.

 

Note that this philosophy includes administrative accounts. In some organizations, administrators use standard user accounts for day to day tasks, and separate administrator accounts for privileged access. A person with a standard user account and an administrator account only needs one Entra ID Governance license for both identities to be governed. Of course, they could also leverage Entra ID Governance’s Privileged Identity Management (PIM) to temporarily elevate the access rights of a single account, instead of maintaining two accounts.

 

To illustrate this scenario, let’s consider an organization called Contoso, which owns ZT Tires and Tailspin Toys. Mallory is hired by Contoso, which uses Lifecycle Workflows in Entra ID Governance to onboard her user account and grant her access to the resources she needs for her job. Her account receives an access package with an entitlement to ZT Tires’ ERP app, and she requests access to Tailspin Toys inventory management app. Because Mallory has an Entra ID Governance license in the Contoso tenant, her identity can be governed in the ZT Tires and Tailspin Toys tenants with no additional governance licenses – one person, one license.

 

Diego is an identity administrator whose user account is in the ZT Tires tenant. He uses a separate administrator account for privileged access tasks in Contoso, Tailspin Toys, and ZT Tires tenants. Because Diego has an Entra ID Governance license in the ZT Tires tenant, both his user and administrator identities can be governed in all three tenants with no additional governance licenses – again, one person, one license.
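The "one person, one license" rule in the two examples above boils down to counting unique people rather than accounts or tenants. The sketch below illustrates that arithmetic; the account records and their fields are invented for illustration, not an actual Entra data model.

```python
# "One person, one license": licenses required equals the number of unique
# people, regardless of how many tenants or accounts (user vs. admin) they
# span. Names and tenants mirror the examples above; the records are made up.
accounts = [
    {"person": "Mallory", "tenant": "Contoso",       "kind": "user"},
    {"person": "Diego",   "tenant": "ZT Tires",      "kind": "user"},
    {"person": "Diego",   "tenant": "Contoso",       "kind": "admin"},
    {"person": "Diego",   "tenant": "Tailspin Toys", "kind": "admin"},
]

def licenses_required(accounts):
    """One Entra ID Governance license per unique person, across all
    tenants and across standard and administrator accounts."""
    return len({a["person"] for a in accounts})

print(licenses_required(accounts))  # 2 people -> 2 licenses
```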

 

Entra ID Governance in Microsoft Entra External ID

 

The other announcement covered Entra External ID, Microsoft’s solution to secure customer and business collaborator access to applications. In November, I blogged about the licensing model to govern the identities of business guests in the B2B scenario for Entra External ID and shared that pricing would be $0.75 per actively governed identity per month. Because metered, usage-based pricing to govern the identities of business guests is a different model than the existing, licensed-based pricing model to govern the identities of employees, I’d like to share more detail.

 

A business guest identity in Entra External ID will accrue a single $0.75 charge in any month in which that identity is actively governed, no matter how many governance actions are taken on that identity. For example: 

 

A Contoso employee named Gerhart collaborates with Pradeep of Woodgrove Bank to produce Contoso’s quarterly financial statements. Contoso has deployed Entra External ID for its business partners such as Woodgrove Bank. In April, Pradeep accesses Contoso’s Microsoft Teams where Gerhart stores his quarterly reporting documents, but no identity governance actions are taken on his Entra External ID identity, so it doesn’t accrue any charges.

 

In May, Pradeep receives an access package with an entitlement to Contoso’s accounting system, and Gerhart reviews Pradeep’s existing access to Contoso’s inventory management database, as well as to the Teams with the quarterly reporting documents. Because Pradeep’s identity in Entra External ID had identity governance actions taken on it, Contoso will accrue a $0.75 charge. Note that the charge is applied once, even though there were three identity governance actions taken during the month. Once that Entra External ID identity was governed in May, additional identity governance actions do not generate additional charges for that identity in May.
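The metered rule described above is simple: a governed identity accrues one flat charge in any month with at least one governance action, however many actions occur. A minimal sketch, with illustrative data matching Pradeep's April and May:

```python
# Metered billing sketch: a business guest identity accrues a single $0.75
# charge in any month with at least one governance action, regardless of
# how many actions occurred that month. Data below is illustrative.
RATE_PER_GOVERNED_IDENTITY = 0.75

def monthly_charges(actions_per_month):
    """Map {month: number_of_governance_actions} to {month: charge}."""
    return {
        month: (RATE_PER_GOVERNED_IDENTITY if actions > 0 else 0.0)
        for month, actions in actions_per_month.items()
    }

# Pradeep's identity: no governance actions in April, three in May.
charges = monthly_charges({"April": 0, "May": 3})
print(charges)  # {'April': 0.0, 'May': 0.75}
```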

 

To learn more about Microsoft Entra ID Governance licensing, visit the Licensing Fundamentals page.

 

 

Read more on this topic 

Entra ID multi-tenant collaboration
Microsoft Entra External ID general availability

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


Ontology

Decentralized Identity

The Key to Robust DAO Governance

Decentralized Autonomous Organizations (DAOs) represent a revolutionary approach to collective decision-making and resource management in the Web3 era. However, their effectiveness hinges on a critical factor: the integrity of participant identities. This is where decentralized identity solutions, such as Ontology Network’s ONT ID, play a pivotal role.

The challenge of Sybil attacks — where malicious actors create multiple fake identities to manipulate voting outcomes — poses a significant threat to DAO governance. Traditional identity systems, reliant on centralized authorities, are ill-equipped to address this issue in the decentralized world of DAOs.

Decentralized identity frameworks offer a compelling solution. By leveraging blockchain technology, these systems enable individuals to prove their uniqueness and credentials without compromising privacy. ONT ID, for instance, allows users to selectively disclose verified attributes, ensuring both anonymity and accountability in DAO participation.
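Selective disclosure can be illustrated with salted hashes, the mechanism behind formats like SD-JWT. This is a simplified sketch, not ONT ID's actual protocol: the issuer commits to salted digests of each attribute (and would sign them, omitted here), and the holder later reveals only the attributes a verifier needs.

```python
import hashlib
import secrets

def commit(attributes):
    """Issuer side: salt and hash each attribute. The digests would be
    signed into the credential (signature omitted in this sketch)."""
    salted = {k: (secrets.token_hex(8), v) for k, v in attributes.items()}
    digests = {
        k: hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
        for k, (salt, value) in salted.items()
    }
    return salted, digests

def verify(key, salt, value, digests):
    """Verifier side: recompute the digest for the disclosed attribute
    and check it against the committed value."""
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == digests[key]

# Holder proves uniqueness for a DAO vote without revealing date of birth.
salted, digests = commit({"unique_person": "true", "dob": "1990-01-01"})
salt, value = salted["unique_person"]  # only this attribute is disclosed
print(verify("unique_person", salt, value, digests))  # True
```

A real deployment would bind the digests to an issuer signature and the holder's key; this sketch only shows why disclosing one attribute reveals nothing about the others.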

The benefits of integrating decentralized identity into DAO governance are manifold:

Sybil resistance: Verifiable, unique identities prevent vote manipulation.
Enhanced trust: Participants can be confident in the integrity of governance processes.
Granular access control: DAOs can implement nuanced voting rights based on verified credentials.
Privacy preservation: Members retain control over their personal information.

Ontology Network’s approach to decentralized identity is particularly well-suited for DAO applications. Its high-speed, low-cost blockchain infrastructure ensures scalability, while the ONT ID framework provides the flexibility needed for diverse DAO governance models.

As DAOs continue to evolve and gain prominence in the digital economy, the integration of robust decentralized identity solutions will be crucial. By leveraging technologies like ONT ID, DAOs can foster truly democratic, transparent, and secure governance structures — paving the way for a more equitable and decentralized future.

Decentralized Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Microsoft Entra (Azure AD) Blog

Microsoft Entra Suite now generally available


Today we announced the general availability of Microsoft Entra Suite - the industry’s most comprehensive secure access solution for the workforce. The Microsoft Entra Suite delivers a complete Zero Trust user access solution and enables organizations to converge their access policy engines across identities, endpoints, and private and public networks.

 

What is Microsoft Entra Suite? 

The Microsoft Entra Suite delivers a complete cloud-based solution for workforce access. It brings together identity and network access that secures employee access to any cloud or on-premises application and resource from any location, consistently enforces least privilege access, and improves the employee experience.

 

This new offering advances our vision for the Microsoft Entra product line that can serve as a universal trust fabric for the era of AI, securely connecting any trustworthy identity with anything, from anywhere. In a recent blog post we also shared the four stages of creating such a trust fabric for your organization, starting with foundational Zero Trust controls, and extending it to protecting access for your workforce, protecting access for your customers and partners, and protecting access in any cloud. The Microsoft Entra Suite delivers the complete toolset for the second stage of this journey – secure access for your workforce.

 

The Microsoft Entra Suite includes the following products:  

 

Microsoft Entra Private Access – an identity-centric Zero Trust Network Access solution that secures access to private apps and resources and reduces operational complexity and cost by replacing legacy VPNs.

Microsoft Entra Internet Access – an identity-centric Secure Web Gateway (SWG) for SaaS apps and internet traffic that protects against malicious internet traffic, unsafe or non-compliant content, and other threats from the open internet.

Microsoft Entra ID Governance – a complete identity governance and administration solution that automates the identity and access lifecycle to ensure that the right people have the right access to the right apps and services at the right time.

Microsoft Entra ID Protection – an advanced identity solution that blocks identity compromise in real time using high-assurance authentication methods, automated risk and threat assessment, and adaptive access policies powered by advanced machine learning (also included in Microsoft Entra ID P2).

Microsoft Entra Verified ID – a managed verifiable credentials service based on open standards that enables real-time identity verification in a secure and privacy-respecting way. Included in the Microsoft Entra Suite are premium Verified ID capabilities, starting with Face Check.

The Microsoft Entra Suite enables you to:

Unify Conditional Access policies for identities and networks.
Ensure least privilege access for all users accessing all resources and apps.
Improve the user experience for both in-office and remote workers.
Reduce the complexity and cost of managing security tools from multiple vendors.

 

Check out the Microsoft Entra Suite introductory video below:

 

 

Unify Conditional Access policies for identities and networks 

You only have to manage one set of policies in one portal to configure access controls for both identities and networks. Conditional Access evaluates any access request, no matter where it’s coming from, performing real-time risk assessment to strengthen protection against unauthorized access.  

 

Ensure least privilege access for all users accessing all resources and apps 

You can automate the access lifecycle from the day a new employee joins your organization, through all their role changes, until the time of their exit. No matter how long or multifaceted an employee’s journey, Microsoft Entra ID Governance ensures that your employees have the right access to just the applications and resources they need, helping prevent an adversary’s lateral movement in case of a breach.  

 

Improve the user experience for both in-office and remote workers 

You can ensure that employees enjoy a faster and easier onboarding experience, faster and more secure sign-in via passwordless authentication, single sign-on for all applications, and superior performance. Using a self-service portal, your employees can request access to relevant packages, manage approvals and access reviews, and view request and approval history. Face Check with Microsoft Entra Verified ID enables real-time verification of your employee's identity, which streamlines remote onboarding and self-service recovery of passwordless accounts.  

 

Reduce the complexity and cost of managing security tools from multiple vendors 

Since traditional on-premises security solutions don’t scale to the needs of modern cloud-first, AI-first environments, organizations are seeking ways to secure and manage their assets from the cloud. With the Microsoft Entra Suite, you can retire multiple on-premises security tools, such as traditional Virtual Private Networks (VPNs), on-premises Secure Web Gateways (SWGs), and on-premises identity governance. 

 

Microsoft Entra Suite is currently priced at $12 per user per month. Microsoft Entra ID P1 is a licensing and technical prerequisite. Please refer to the Microsoft Entra Suite pricing page for more detail.

 

 

Join us for upcoming events! 

We encourage you to watch the Zero Trust spotlight on demand, where Microsoft experts and thought leaders dove deeper into these and other announcements, including the general availability of Entra Internet Access and Entra Private Access, which is part of the Microsoft Entra Suite.  

 

Additionally, register for the Tech Accelerator to join us on August 14, 2024, for a deep dive into the Microsoft Entra Suite, and Private Access and Internet Access products. 

 

 

Learn More 

The availability of the Microsoft Entra Suite marks a key milestone in our commitment to continue to provide a more seamless and robust secure access experience that will empower the workforce anywhere and everywhere. Learn more from the official announcement.

 

Visit the Microsoft Entra Suite trial page to get started. 

 

Irina Nechaeva, General Manager, Identity and Network Access Product Marketing 

 

 

Read more on this topic 

Watch the Microsoft Entra Suite mechanics video
Microsoft Entra product page
Microsoft Entra portal

 


Dock

eIDAS 2.0: A Beginner's Guide


Professionals in identity companies often grapple with the complexities of evolving digital ID regulations. They must keep up with these changes to ensure compliance and leverage new opportunities.

That's where eIDAS 2.0—the latest update to the European Union's digital identity framework—comes in.

Full article: https://www.dock.io/post/eidas-2


KuppingerCole

Diving Deeper: Recent Insights From the KuppingerCole Analysts’ Cybersecurity Council Meeting


by Berthold Kerl

In the fast-changing landscape of cybersecurity, cooperation and sharing insights among professionals are essential for addressing challenges and influencing the future of digital safety. The KuppingerCole Analysts’ Cybersecurity Council, a notable group of more than 30 Chief Information Security Officers (CISOs) from various sectors, gathered for its second meeting of 2024 on June 5 at the European Identity & Cloud Conference (EIC). This gathering continued the discussions initiated by the council on February 28, 2024, covering several important topics.

Diving Deeper into Cybersecurity Frontiers

The council's meeting agenda was rich and varied, reflecting the breadth and depth of challenges that cybersecurity professionals face today. Key topics discussed included:

Defense against Mis/Disinformation: The World Economic Forum’s 2024 Global Risk Report has stated that disinformation is the world’s top risk in the next two years. US Navy veteran Dr. Pablo Breuer and former US rep at the World Trade Organization Daniella Taveau provided insights into these risks and how organizations can mitigate them. The recommendations and outcomes highlight the importance of developing a comprehensive response plan for information across the organization. Proactive measures against disinformation should be implemented prior to any incidents. It is crucial to educate executives about the risks associated with deep fakes. Additionally, users and clients need guidance on where to find credible information and how to identify and report misleading content. Furthermore, it is essential to revise authentication processes for operations considered high-risk.

Harmonizing Regulatory Requirements: CISOs struggle with multi-regulatory requirements, which are sometimes unclear or even conflicting. KuppingerCole Analysts are working on a whitepaper that can serve as an open letter to authorities, as well as on a tool to support multi-regulatory compliance, the KuppingerCole Compliance Navigator. Martin Kuppinger and Matthias Reinwarth jointly discussed this initiative.

Passwordless for Consumers: Alejandro Leal, Senior Analyst at KuppingerCole Analysts, presented his latest Leadership Compass, which provides a comprehensive overview of the Passwordless Authentication for Consumers market. As demand for seamless and secure authentication experiences rises, the market for these solutions has grown significantly.

Cybersecurity Recommendations for 2024-2033: Annie Bailey, Research Director at KuppingerCole, presented the final workshop results for the Recommendations 2024-2033 report. Based on work with experts, it provides eight recommendations for CISOs preparing for 2033. Among them: CISOs should prioritize advocacy for resilience and recovery, maintain fundamental cyber hygiene, and understand the adversaries they face. Collaboration within the cybersecurity sector is essential to enhance transparency and security throughout supply chains. It's important to view AI not only as a potential risk but also as a valuable tool for mitigating those risks. A comprehensive approach to user-centric security is necessary, and identity security should be integral to the organization’s overall security framework. Additionally, CISOs must take a more proactive role in influencing both national and international regulations.

cyberevolution 2024: Berthold Kerl shared the preliminary event agenda, which covers 18 topics. The conference, set to take place from December 3 to December 5, 2024, aims to blend discussions of futuristic cybersecurity innovations with foundational cyber hygiene practices, maintaining a global perspective with a strong European focus.

Next Steps

The council's next meeting is scheduled for September 4th, 2024, promising to further the dialogue on these critical topics and fostering deeper insights and strategies to navigate the complex cybersecurity landscape. The final meeting of 2024 will take place on December 4th, 2024, onsite during the cyberevolution event in Frankfurt.

As the KuppingerCole Analysts’ Cybersecurity Council continues its vital work, the insights and outcomes from its meetings are a testament to the power of collaboration in advancing the field of cybersecurity. Through the shared expertise of its members, the council not only addresses the challenges of today but also shapes the cybersecurity frameworks of tomorrow.

Sunday, 04. August 2024

KuppingerCole

Lessons Learned from the CrowdStrike Incident


Matthias, Martin, John, Alexei, and Mike discuss the recent CrowdStrike incident and its impact on global players. They highlight the need for better software testing and validation processes to prevent such incidents. The conversation also touches on the importance of diversity in software solutions and the role of regulation in ensuring security. The analysts suggest measures such as phased rollout of updates, automated risk scoring, and improved backup and recovery processes. They emphasize the need for organizations to have resilience plans in place and to evaluate the tools and vendors they rely on.



Friday, 02. August 2024

KuppingerCole

Software Supply Chain Security: Are You Importing Problems?

by Alexei Balaganski

Software supply chain security (SSCS) is a really curious subject. On the one hand, nearly everyone has an intuitive understanding of what SSCS means and how critical it can be for the success of a modern digital business. After all, we have seen the consequences of multiple large-scale incidents recently, which have all been labelled “supply chain attacks” by the press.

A bit of history

Perhaps the first widely known event of this kind was the notorious SolarWinds hack in 2020, when a malicious actor managed to inject malware into a popular IT management tool that was then deployed to thousands of clients and used as an attack vector in multiple security breaches. In late 2023, we had the breach at Okta, a leading identity provider, which affected many of their enterprise customers, including several security vendors (who were, luckily, the first to raise the alarm). Finally, just a couple of weeks ago, the entire world observed the catastrophic consequences of the botched software update by CrowdStrike, which literally grounded entire airlines and forced multiple banks and hospitals to halt their operations.

On the other hand, there still seems to be no common agreement on what exactly defines an issue as a supply chain attack and consequently, who should be responsible for the damage. Consider, for example, the recent case of attempted compromise of XZ Utils, when an unknown but possibly state-sponsored threat actor tried to infiltrate the open-source project and introduce a backdoor into a ubiquitous Linux utility.

Luckily, this attempt was not successful, but we do know how massive the potential consequences of an implanted backdoor could be – you need to look no further than Crypto AG, a Swiss cryptography provider that served as a front for a CIA operation for nearly 50 years. Multiple other vulnerabilities in popular open-source projects have been recognized as supply chain attacks as well: Heartbleed, Log4Shell, regreSSHion, etc. To be honest, the package management ecosystems for popular languages like JavaScript or Python are currently such a mess that they can be considered huge attack vectors as well.

As a result, there seems to be a widespread opinion, not just among the public but among industry experts as well, that software supply chain security is a field of cybersecurity that is entirely focused on dealing with dangerous open-source libraries and is thus primarily a responsibility of software developers. While there is definitely a grain of truth in this sentiment, it quickly becomes completely irrelevant when we try to come up with practical recommendations for organizations affected by an ongoing incident or just looking for measures to prevent a future one from happening. Most of those organizations are not directly involved in software development and simply want to be more resilient against problems caused by their suppliers.

What is software supply chain security anyway?

Software supply chain security involves managing risks associated with software acquired from third-party sources. In today's interconnected world, every organization uses third-party software, including operating systems, commercial off-the-shelf software, custom applications produced by contractors or, in some cases, even programs developed in-house. Ensuring the integrity and security of all these software components is paramount yet challenging, especially considering the confusion about the responsibilities of the multiple parties involved.

Organizations face increasing regulatory pressures, including NIS2 and DORA, which mandate constant risk management and supply chain risk assessments. These regulations require organizations to understand their entire supply chain, including indirect dependencies. Typically, end users lack in-depth knowledge about software development. They seek compliance with regulations without delving into the technical intricacies, and this can often lead to costly mistakes.

Perhaps the biggest misconception about SSCS is that it falls under the responsibility of an organization’s cybersecurity team. What the CrowdStrike incident has clearly demonstrated is that having too much security can indeed be bad. Companies that were following the security best practices – deploying agents on every machine, automated deployment of patches, etc. – were in the end affected the most, having to deal with much more damage.

This reminds me again about the decade-long debate between the IT and OT security experts and people ridiculing the latter for placing process continuity and personal safety above the quick response to security breaches. Well, how the tables have turned… If the CrowdStrike incident is supposed to teach us anything, it should be that security is never the goal, but just a means for achieving better business resilience against catastrophic events and finding the right balance between security and availability should be the guiding principle for everyone.

So, how about calling it “Software Supply Chain Risk Management” instead?

The pragmatic approach

As analysts, we strive to offer practical advice to every organization. However, such advice would be substantially different for various organizations and stakeholders within them. For example, businesses with strong internal software development activities, such as CrowdStrike itself, obviously need to invest a lot into securing their entire software development lifecycle. The market nowadays offers numerous solutions ranging from universal application security testing platforms to highly specialized solutions, like the ones for managing secure artifact delivery or producing the software bill of materials.

Even more important is understanding that the traditional view of the software development lifecycle within a single organization simply no longer reflects the reality of our interconnected world. The life of a software product does not end at the moment it is delivered to a customer – in fact, it only just begins. And since it no longer remains in the hands of one party, the responsibility must be shared properly among several stakeholders. We have figured this model out for cloud services already – why not adopt something similar for every software product?

In a sense, Software Supply Chain as a strategy, just like Zero Trust, cannot be bought off-the-shelf. It requires a combination of careful planning, changing the business processes, improving communications with your suppliers and customers and, of course, a substantial change in regulations. We are already seeing the first laws introducing stronger punishment for organizations involved in critical infrastructure, with their management facing jail time for heavy violations. Well, perhaps the very definition of “critical” must be revised to include operating systems, public cloud infrastructures, and cybersecurity platforms, considering the potential global impact of these tools on our society.

But how can end-user organizations influence these processes if they are not involved in developing the software they are using? My colleague Mike Small has already published his recommendations right after the CrowdStrike incident. To his practical advice I can only add another bit of philosophical musing: security is impossible without trust, but too much trust is even more dangerous than too little security.

Start utilizing the Zero Trust approach for every relationship with a supplier. This can be understood in various ways: from not taking any marketing claim at face value and always seeking a neutral third-party opinion, to very strict and formal measures like requiring a high Evaluation Assurance Level of the Common Criteria (ISO 15408) for each IT service or product you deploy. If you are looking for more information and practical advice, why not join us at the upcoming cyberevolution 2024 conference in Frankfurt this December? Software Supply Chain Security, Cyber Resilience, and NIS2 and DORA regulatory compliance will be major topics presented by industry experts.

Thursday, 01. August 2024

Spruce Systems

Sprucing Up Our Brand Identity

We have a new look, more aligned with our overall brand strategy. In this post, we'll talk more about our evolution and the creative process behind it.
Where We Began

SpruceID’s mission, since the founding of our company, is to let users control their identity and data across the web. From a very early (still accurate) SpruceID blog, “Our ultimate goal [is to] enable a future where everyone has access to a secure, private, and highly portable set of credentials and data they can take with them across the digital universe. In this future, these credentials will be inalienably yours, to use when necessary to gain access to a given area or activity.”

In the early days, we found our roots (yes, tree puns never get old) within the Web3 developer ecosystem, building a suite of open-source libraries to connect on-chain and off-chain identifiers and activity. Our early branding reflected this developer audience focus, featuring a dark mode design with futuristic graphic imagery and technical language that resonated with developers deeply embedded in the Web3 ecosystem. 

A version of our website from 2022.

We quickly learned that in addition to the tens of millions of people using cryptographic keys on Ethereum, there is another major audience actively using public-private keypairs that is already deeply entrenched in the business of issuing credentials to people – governments. 

In 2022, SpruceID won a contract with the California DMV to build out a mobile driver’s license solution and wallet application for Californians. This project underscored the importance of privacy-forward, standards-compliant verifiable digital credentials (VDCs) that can be seamlessly integrated into both public and private sector systems​. We were, and continue to be, excited and honored to collaborate with true visionaries at the California DMV who have worked tirelessly to champion the privacy and security of users in the pilot program. 

Since our initial foray into the public sector, we’ve found a strong foothold and have begun work on VDC implementation contracts with multiple state-level and national governments that are ideologically aligned with our values.

Today, we are excited to announce a significant rebrand that aligns with our expanded mission to serve not only Web3 enthusiasts but also governments and enterprises. Our updated look features lighter colors, more approachable and tangible design elements, and our messaging is crafted to be inclusive and easily understood by stakeholders at all levels.

The Evolution of Our Brand Identity

At the heart of every brand is a visual identity that resonates with its audience and communicates core values. We began our rebrand journey with the question: Who are we designing for? What do they care about? What motivates them? How do they like to learn?

Throughout this initiative, we relied heavily on research about our key audiences to ensure that all elements of our new brand (logo, colors, fonts, and tone) communicate the values most important to us and resonate with those with whom we build relationships. 

Read on to get an inside look at our creative process, led by our Sr. Designer, Scotty Matthewman.

Establishing our Brand Values

At the start of this project, we distilled our values into 5 core attributes: trustworthy, inventive, pioneering, conscientious, and secure. These values influenced every aspect of our rebranding effort, from the tone of our communication to the visual elements of our identity. 

We broke out our core values to brainstorm synonyms (below) that might spark visual reactions within our audience, and allow those experiencing our brand to feel heard and served by the solutions we offer. We ultimately aimed to create a brand that not only comes across as professional and reliable but also feels inclusive and innovative.

Bringing Our New Logo To Life

The evolution of our logo began with a focus on our core mission: empowering people to have greater control over their personal data. We explored many different iterations and directions, drawing inspiration from our design values while also trying to capture elements of identity, innovation, and security.

We liked the idea of many small elements, representing that people have many facets of their identity.

This direction felt representative of a few elements that resonated with us:

With a version we felt excited about, we decided to validate our hypothesis with real people within our key audiences. We surveyed a group of users to share feedback and preferences among three different logo iterations (see survey results below). 

With positive user feedback on the logo direction, we shifted our focus to defining our brand colors, fonts, and imagery.

Color Palette: Balancing Trust and Innovation

Choosing the right color palette was important in setting the tone for our brand. We wanted a color that conveyed trust and safety, which is traditionally represented by blue. However, we also wanted to avoid the commonly used vibrant blue, so our solution was a muted blue with a slight lean towards purple, creating a balance that feels trustworthy, innovative and approachable.

The primary colors we landed on for our new brand are ‘Spruce’ blue, warm white, and black, complemented by warm neutrals with splashes of purple and green for vibrancy. This combination helps us stand out while maintaining a professional, inviting, and authentic appearance that resonates with our audience.

Font Choices: Merging Tradition with Modernity

In selecting our fonts, we aimed to balance modernity and readability. Initially, we experimented with sans-serif fonts, which are known for their clarity and accessibility. We wanted something that was not overly emotive or playful, but that would still allow us to be memorable and unique.

Our final choice includes Switzer for body text, known for its versatility and legibility, and Garamond for headers—a serif font that is also easily legible and adds a touch of traditional elegance. These design choices emphasize our commitment to humanizing technology.

Visual Assets: Clarity and Functionality

Our visual assets, including mockups, photos, and vector illustrations, play an important role in communicating complex technical concepts to those with varying levels of technical expertise. We prioritize real-world mockups over abstract representations, ensuring our visuals are clear and directly tied to our message. 

This approach aligns with our value of inclusivity, making sure our content is accessible and understandable to all audiences.

A New Chapter for the SpruceID Brand

Evolving our look and feel as a company has allowed us to align more closely with our core values, while making a very technical industry less abstract and more approachable. As we continue to grow and innovate, our brand will remain an important tool in our journey to empower users and drive the modernization of digital identity. This is just the beginning of a new and exciting chapter for us.

If you want to see our new brand identity in practice, check out our newly redesigned website, spruceid.com.


Thales Group

Thales Chosen as Supplier for Lilium's Revolutionary eVTOL Jet Program

Thales, a global leader in airborne communication and navigation solutions, formerly known as Cobham Aerospace Communications, has been selected by Lilium as the sole supplier of navigation and communication antennas for their ground-breaking electric Vertical Takeoff and Landing (eVTOL) jet program. Lilium is a leading electric aircraft manufacturer, aiming to revolutionize Regional Air Mobility (RAM) and drastically reduce carbon emissions with the Lilium Jet, targeted for entry into service in 2026. With Thales Aerospace Communications solutions onboard, Lilium's vision takes another leap forward towards a sustainable and efficient aerial transportation future.


This partnership between Thales and Lilium is a pivotal moment in the aviation industry, merging cutting-edge technology with decades of expertise. The planned shipset for Lilium's eVTOL fleet will include an optimal blend of Commercial Off-The-Shelf (COTS) and bespoke antennas. These antennas include VHF Com + GPS, Wi-Fi/LTE, ELT, GNSS, DME, VOR, LOC, Glideslope, UAT, and Transponder, ensuring seamless communication and navigation capabilities for Lilium's innovative aircraft.


With over 50 years of industry-leading experience, Thales is renowned for excellence in airborne antenna manufacturing. Their unwavering commitment to quality, coupled with an unparalleled understanding of fixed-wing and helicopter platforms, uniquely positions the company to tackle the challenges inherent in eVTOL aircraft design. From minimizing drag and weight to optimizing antenna placement, Thales' expertise will be instrumental in the success of Lilium's ambitious eVTOL program while advancing sustainability goals.


As Lilium's eVTOL program continues to reach new heights, the world eagerly anticipates the transformative impact it will have on the future of aviation.

“Quality and experience are key factors in choosing the right supplier for each component of the Lilium Jet. We are happy to be working with Thales Aerospace Communications, who produce high-quality navigation and communication antennas for our Lilium Jets and have decades of experience in the field,” said Martin Schuebel, Senior Vice President of Procurement at Lilium.

“The Regional Air Mobility segment presents a unique opportunity to assist eVTOL aircraft manufacturers as they seek to overcome the cost and time challenges of metropolitan air transportation. We have greatly enjoyed collaborating with Lilium and learning about their Regional Air Mobility challenges, and we believe this experience will enhance our position as a trusted supplier to this domain,” said Nicolas Bonleux, Vice-President, Thales Aerospace Communications.


Dock

Dock Launches Privacy-Preserving Credential Monetization

Zug, Switzerland – August 1st, 2024 – Dock announced today the launch of its Privacy-Preserving Credential Monetization feature within its Decentralized ID platform. This cutting-edge innovation enables organizations to generate new revenue streams by charging for the verification of Digital ID credentials that they issue. 

With this advanced feature, Dock's platform sets a new industry standard, empowering organizations to launch an ID Ecosystem for their partners to securely share and monetize verifiable credentials. This accelerates onboarding processes, enhances transaction speeds, and improves business efficiencies. Importantly, user privacy is protected, as issuers and ecosystem administrators cannot identify which specific user or credential has been verified.

Traditionally, issuing ID credentials has been a cost burden for issuers who make the investment to ensure credentials contain high-quality information. However, with Dock's new feature, ID companies can transform this expense into a new revenue stream by charging for credential verifications. Credentials are part of an ecosystem where verifiers must pay a price for each verification, making it easier for issuers to generate revenue from credential issuance. This innovation enhances the economic viability for all stakeholders within a Digital ID Ecosystem.

Dock's technology uses Keyed Verification Anonymous Credential (KVAC) cryptography to ensure that credentials can only be verified by members of an ecosystem with a billing relationship.
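Dock's exact construction isn't specified here, but the core keyed-verification idea, that only parties holding a shared secret key can check a credential, can be sketched with a plain HMAC. This is a simplified illustration only (all names, keys, and attributes are hypothetical); real KVAC schemes use algebraic MACs and zero-knowledge proofs to achieve the privacy properties described above:

```python
import hashlib
import hmac
import json

def issue(issuer_key: bytes, attributes: dict) -> dict:
    """Issuer tags the attributes with a MAC under the ecosystem key."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    tag = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "tag": tag}

def verify(key: bytes, credential: dict) -> bool:
    """Only a holder of the same key can recompute and check the tag."""
    payload = json.dumps(credential["attributes"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

# Hypothetical key shared only by ecosystem members with a billing relationship.
ecosystem_key = b"shared-by-billing-members"
cred = issue(ecosystem_key, {"kyc_level": "2", "country": "CH"})

assert verify(ecosystem_key, cred)        # ecosystem member: verification succeeds
assert not verify(b"outsider-key", cred)  # non-member: cannot verify
```

The point of the keyed design is economic as much as cryptographic: because verification requires a key distributed only to paying ecosystem members, each verification can be metered and billed.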

Privacy-Preserving Monetization

At Dock, privacy is our priority. Our Credential Monetization feature ensures that ecosystem administrators can track paid verifications but cannot identify which specific user or credential has been verified, preserving confidentiality and fostering a trust-rich environment. Users must give explicit consent for each verification, maintaining their control over their digital identity.

“In the past, a lack of ways in which organizations can generate revenue from verifiable credentials has significantly constrained adoption of this amazing technology. With the release of this new feature, clearly defined business models are now integrated into Dock’s issuance and verification platform enabling entities to roll out new products at scale.” said Nick Lambert, Dock’s CEO.

About Dock

Dock’s Decentralized Identity platform enables companies to turn verified ID data into trusted Reusable Digital ID Credentials, instantly verify their authenticity, and get paid when they are verified by third parties. It comprises an API, a Web App, an ID wallet, and a dedicated blockchain. Dock has been a leader in decentralized digital identity technology since 2017 and is trusted by organizations in diverse sectors, including healthcare, finance, and education.


Ontology

Ontology Monthly Report — July


July has been a month full of advancements, strategic partnerships, and vibrant community engagement at Ontology. Here’s a look back at the significant events and developments that shaped this exciting month:

Community and Web3 Influence 🌐🤝

- Ontology x KIMA Campaign Winners: We’ve announced the winners of the Ontology x KIMA campaign, celebrating community participation and engagement.
- Stake2Earn Campaign: Our Stake2Earn campaign is nearing its end. Don’t miss your last chance to join and earn rewards!
- ETHCC Participation: Ontology had a productive panel discussion at ETHCC, partnering with MPost during their Hack Seasons event.
- New Content Released: We published a new insightful article this month, adding to our growing library of resources on blockchain technology.

Development/Corporate Updates 🔧

Development Milestones 🎯

- Ontology EVM Trace Trading Function: We have successfully completed the development of the Ontology EVM Trace Trading Function, now fully operational at 100%.
- ONT to ONTD Conversion Contract: Progress on this front has reached 70%, making the conversion process smoother for our users.
- ONT Leverage Staking Design: Our staking design has achieved 70% completion, soon to offer innovative and flexible staking options.

Events and Partnerships 🤝

- New Partnerships: This month, we established a significant partnership with HeLa Labs, aiming to enhance our technological offerings and reach.
- Exciting Giveaways: We conducted multiple giveaways, including collaborations with DXSALE and X World Games, to reward and engage our community.

ONTO Wallet Developments 🌐🛍️

- Upcoming ONTO Version: Stay tuned for the upcoming new version of ONTO, promising enhanced features and improved user experience.
- Partnership with Atleta Network: ONTO has partnered with the Atleta network for a new campaign, expanding our community activities.
- Node Creation Feature: Users can now create and manage nodes directly within the ONTO app, simplifying node operations.
- New Listings and Giveaways: ONTO listed $BIAO and hosted a giveaway of 5000 ARB with Fiat24, providing more opportunities for user participation.

On-Chain Metrics 📊

- dApp Ecosystem Growth: The total number of dApps on our MainNet holds steady at 177, reflecting a vibrant and thriving ecosystem.
- Transaction Increases: This month, we observed an increase of 5,945 dApp-related transactions and 10,519 MainNet transactions, indicating robust network activity and utilization.

Community Engagement 💬

- Vibrant Community Discussions: Our social platforms have been abuzz with lively discussions and insights, driven by the passion and enthusiasm of our community members.
- Recognition through NFTs: Active community members were celebrated with the issuance of NFTs, acknowledging their contributions and engagement.

Follow us on social media 📱

Keep up with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Thank you for your unwavering support and participation this month. As we move forward, we are excited about the opportunities that lie ahead and are committed to delivering groundbreaking solutions and fostering a more inclusive and secure digital future. Let’s continue to innovate and grow together!

Ontology Monthly Report — July was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Holochain

hApps Spotlight: Relay

“We Needed This.”

There is a Holochain app on mobile phones. 

Shipping this fall, Volla Phone’s new Quintus model will have two Holochain apps preloaded on it. One of these is Relay. On its face, Relay is a simple chat app, but its impact is much deeper than that. Like Signal, Relay is fully encrypted. But unlike the industry standard for secure communication, Relay doesn’t use central servers, adding an additional layer of security and privacy. Relay also doesn’t need your phone number, as it addresses its messages directly to your public key, which acts as a decentralized digital identifier. Not only does Relay come preinstalled on the Quintus, but it will also be available for Windows, MacOS, Linux, and all Volla devices including the Volla Tablet.
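Addressing messages to a public key rather than a phone number can be illustrated with a small sketch: derive a stable identifier from a hash of the key and route messages by that identifier, so no central registry maps identities to people. This is a hypothetical scheme for illustration only; Relay's actual key format and routing are not shown here:

```python
import base64
import hashlib
import os

def address_for(public_key: bytes) -> str:
    """Derive a decentralized identifier from the key itself -- no phone
    number or central directory is involved (illustrative scheme)."""
    digest = hashlib.sha256(public_key).digest()
    return base64.urlsafe_b64encode(digest[:16]).decode().rstrip("=")

# Stand-in for a real Ed25519 public key; the bytes are random here.
alice_pub = os.urandom(32)

# Messages keyed by recipient address, standing in for a peer-to-peer DHT.
inbox: dict[str, list[str]] = {}

def send(recipient_pub: bytes, message: str) -> None:
    inbox.setdefault(address_for(recipient_pub), []).append(message)

send(alice_pub, "hello, no servers involved")
assert inbox[address_for(alice_pub)] == ["hello, no servers involved"]
```

Because the identifier is a pure function of the key, anyone holding your public key can reach you, and nobody can impersonate you without the corresponding private key.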

But how did we get here? There have been calls for Holochain to be on mobile for a while, but it took a set of synergistic needs to make it happen. Volla needed an alternative to the big cloud providers. darksoil studio needed a mobile version of Holochain. And the world needs open source tech.

No project, no technical development, springs up from thin air. It’s rather a process of many small steps and connections that come together to realize new possibilities.  

To tell the story of Relay, we reached out to the people involved to give a full view into this process.

Volla

It was just over a year ago that community developer Hedayat Abedijoo connected with Dr. Wurzer, bringing Nick Stebbings with him to the Volla Community Days, where they demoed Holochain and made the first attempts at developing on mobile. Building on the fantastic early work of Nick and Hedayat, Holochain has been growing ties between Volla and our community ever since. Here is what Volla founder Dr. Wurzer has to say about their choice to integrate Holochain into their products:

“The big picture of Volla is a secure and independent communication infrastructure. A smartphone is an elementary component. The cloud is another important element. The only way to prevent external influence is distributed, highly encrypted edge computing. As soon as we process or manage user data, access could be forced or our service sanctioned.” —Dr. Wurzer, founder of Volla Phone

Volla’s respect for the privacy of their customers really sets them apart in the smartphone market. Following up on the above, we asked Dr. Wurzer what most excited him about the growing partnership with Holochain.

“The message of Volla is freedom through simplicity and security. Simplicity in the sense of convenience. And this convenience also includes the cloud. That's why Apple is so popular with the iCloud and why Google also has this offering. Together with Holochain, we can now tackle the mass market with high-performance hardware that can compete with an iPhone. If we manage to reach the mass market, we will give back privacy, security and self-determination to many consumers who are overwhelmed by technical issues and trends. We bring the power back to the people. It's not just protected communication, but also free access to information, which — I can only speak for Germany and Europe — is already restricted.” —Dr. Wurzer, founder of Volla Phone
Development

To develop Relay, Holochain brought in Aaron Brodeur and Tibet Sprague of Terran Collective along with our very own Eric Harris-Braun. Here is what they have to say about the development process:

“There was a steep learning curve at the beginning, not only deepening my understanding of  Holochain development, but also working with a stack that is pretty new to me: SvelteKit, Tailwind, Skeleton, and of course the P2P Shipyard code that allows it to run on Android using Tauri Mobile. All in all things have gone quite well for such a cutting edge project. The biggest challenge in the Holochain universe is figuring out what bugs are coming from my code and what might be coming from Holochain itself or from the experimental Shipyard code, and keeping up with the many moving pieces. Not to mention also working with a fairly new and evolving platform in Volla Phone and Volla OS.” —Tibet Sprague of Terran Collective
“It's been a huge joy to work with Dr. Wurzer. He's a charming and brilliant entrepreneur with incredible attention to detail, but also a big vision of a phone free from entanglement with the big silicon valley services, allowing people to connect with one another safely and securely. The power of Holochain is obvious to him, and so it was consequently really easy to talk through and work around Holochain's unique affordances — the tradeoffs are worth it!” —Aaron Brodeur of Terran Collective

What was the moment Relay came to life for you?

“The moment we got it running on a Volla Phone for the first time, and Eric and I were successfully able to chat using it was incredible! P2P chat on big tech free phones is such a massively exciting accomplishment and we are going to pull it off.” —Tibet Sprague of Terran Collective
“I was on zoom with Eric Harris-Braun when he first showed me Relay working on the phone. There were many moments early on where we were not 100% sure what we wanted could work on the phone. None of this has ever been tried before! I know on the one hand it's just an app, and at that moment it was just a few lines of text on an empty screen... but I definitely felt like I was witnessing – and contributing to — a historic moment!” —Aaron Brodeur of Terran Collective

What contribution to the Relay app are you most proud of?

“I'm most proud of the brand. I wanted to make something that was adjacent to and compatible with Volla's brand, but distinct. It doesn't often work out like this, but it was the first name and logo I proposed. I love the name because it speaks to how the data is gossiped around within a group – each member of the group is relaying messages on behalf of the other members of the group. The icon is a network diagram in the shape of an R.” —Aaron Brodeur of Terran Collective
p2p Shipyard by darksoil studio

Of course, none of this would have been possible if the hurdles to running Holochain on mobile hadn’t been solved. The highest applause goes to the team at darksoil studio, who did the heavy lifting of building a Holochain plugin for Tauri so that Holochain apps can be deployed to all the platforms Tauri supports: Linux, Windows, macOS, Android, and soon iOS. The p2p Shipyard gives developers an easy way to bring their Holochain applications to a variety of platforms. In the mobile context, Holochain is set to its “zero-arc” configuration, where mobile nodes don’t have to hold a portion of the DHT like a normal Holochain node would. This saves battery life and helps the application meet app store requirements. (Volla uses a variation that does hold full nodes, thanks to their custom-designed OS, which enables tighter integration with Holochain.)

p2p Shipyard was the key development that made Relay and our work with Volla successful, but its potential extends to the whole ecosystem. So let’s dig a bit deeper into their journey:

Can you tell us about the experience of making Holochain mobile ready?

“It’s been a long road. We have been wanting Holochain on mobile for a long time, and ultimately, we needed it badly enough for ourselves that we went ahead and did it. The p2p Shipyard is our second tool to enable Holochain to work on mobile. Our first one, which used Firebase, we knew wouldn’t work long term, but we needed to test our app with users, so we went ahead and did it, and ultimately the learnings from that process contributed to the p2p Shipyard. It’s been a challenging and empowering experience, and it’s not done yet!” —Eric Bear of darksoil studio

How do you see P2P Shipyard growing in the next year?

“Over the next year we hope to see the p2p Shipyard get put to use! We’re hoping to see a number of Holochain apps ship and function across platforms over this next year, and we’re already working with a few projects in the Holochain ecosystem to help them get their hApps into people’s hands (literally). ” —Eric Bear of darksoil studio
Get Involved

To be one of the first people using Relay, you can support Volla through their Kickstarter campaign where they are fundraising for their initial production run of the Quintus. They hit their funding goal in the first 3 hours, but thankfully you can still get a phone from them. 

We don’t know when Relay will be more widely available in app stores, but we expect that to be in the works.

And as for p2p Shipyard, they have a wonderfully innovative funding model which we hope to see more of as it models open source ethics and sustainable business practices all in one. 

Support p2p Shipyard

darksoil wants open source development to be more sustainable, so with the p2p Shipyard, they’re using an experimental funding model called retroactive crowdfunding.

Basically, they built the software first and are now funding it after the fact. Once their retroactive crowdfunding goal of $100k is met, the p2p Shipyard will be free and open source forever.

Currently, the p2p Shipyard is source-available, so the code is publicly visible to audit, but a license is needed to use it. 

During the source-available phase, anyone interested in using the p2p Shipyard can reach out to them for a license, and all license fees will go toward the retroactive crowdfunding goal.

Once they meet the goal, and the p2p Shipyard is open source, they will continue to offer support services to maintain, improve, and adapt the p2p Shipyard to meet more people’s needs.

darksoil welcomes anyone invested in the Holochain ecosystem to support this infrastructure with a donation.


Tokeny Solutions

Transaction Privacy: The Last Blocker for Massive Open Finance Adoption

The post Transaction Privacy: The Last Blocker for Massive Open Finance Adoption appeared first on Tokeny.
July 2024

Open finance is a new approach to financial services, characterized by decentralization, open access, and innovation. It refers to financial activities and transactions that occur through decentralized infrastructures like blockchain networks. It enables transparency, speed, and interoperability, delivering exceptional efficiency and transforming the way financial systems operate.

The need to bring privacy to open finance

While transparency and reduced information asymmetry can make finance more inclusive and efficient, not all data can be made public. Sensitive information, such as identities and confidential asset data, must remain confidential. The challenge is to balance privacy with the need for transparency. The solution lies in sharing status credentials, such as confirming that a user is KYC-checked or that an asset is ESG-compliant, without exposing the underlying data onchain.

Leverage status to ensure data privacy with ERC-3643

ERC-3643 offers a solution for maintaining data privacy by focusing on status verification. Instead of making actual asset or investor data public onchain, only the status credentials, so-called “claims” (e.g., ESG certificate proof, KYC check proof), are shared. Smart contracts can check these claims to confirm status (e.g., whether KYC proof has been obtained). This approach prevents the need for data reconciliation and avoids public exposure of sensitive information.
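To make the claim-based flow concrete, here is a minimal, hypothetical sketch of status verification in Python. The names (`ClaimRegistry`, `can_transfer`, the `"KYC"` topic) are invented for illustration; the real ERC-3643 standard implements this with onchain identity smart contracts and trusted claim issuers, not an in-memory dictionary.

```python
# Illustrative model only: a simplified, in-memory analogue of
# ERC-3643-style claim checking. Only status claims are stored;
# the underlying KYC data never appears here.

class ClaimRegistry:
    """Maps an investor identity to the set of status claims it holds."""

    def __init__(self):
        self._claims = {}  # identity -> set of claim topics, e.g. {"KYC"}

    def add_claim(self, identity: str, topic: str) -> None:
        self._claims.setdefault(identity, set()).add(topic)

    def has_claim(self, identity: str, topic: str) -> bool:
        return topic in self._claims.get(identity, set())


def can_transfer(registry: ClaimRegistry, receiver: str,
                 required_topics: list[str]) -> bool:
    # A permissioned token allows the transfer only if the receiver
    # holds every required status claim.
    return all(registry.has_claim(receiver, t) for t in required_topics)


registry = ClaimRegistry()
registry.add_claim("0xInvestorA", "KYC")

print(can_transfer(registry, "0xInvestorA", ["KYC"]))  # True
print(can_transfer(registry, "0xInvestorB", ["KYC"]))  # False
```

The key point the sketch shows: the transfer check consults only the presence of a claim, never the personal data behind it.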

Addressing Transaction Privacy

Despite these advances, transaction and balance privacy remain concerns on public blockchains. Large institutions want to keep their balances and transaction data private. Though stablecoin regulations are becoming clearer (e.g., the European MiCA regulation), the need to hide trade prices to avoid market panic deters institutions from using onchain cash for tokenized securities. This makes atomic settlement unrealistic, as the onchain cash leg goes unused.

To tackle this, innovative solutions like Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proof (ZKP) have emerged. At Tokeny, we are closely working with privacy solution providers who are building solutions with these cutting-edge technologies to address these issues.
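FHE and ZKP are far beyond the scope of a short snippet, but the underlying goal of keeping a value private while still allowing later verification can be illustrated with a simple hash commitment. This sketch is purely illustrative and does not represent the designs Tokeny or its partners are building.

```python
import hashlib
import secrets

def commit(price: int) -> tuple[str, bytes]:
    """Commit to a trade price without revealing it.
    Returns the public commitment and the secret blinding nonce."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + str(price).encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, price: int, nonce: bytes) -> bool:
    """Later, the committed price can be opened and checked by anyone."""
    return hashlib.sha256(nonce + str(price).encode()).hexdigest() == commitment

# Only the commitment would be published; the price stays private
# until (and unless) the committer chooses to open it.
c, n = commit(1_000_000)
print(verify(c, 1_000_000, n))  # True
print(verify(c, 999_999, n))    # False
```

Real privacy solutions go much further (proving properties of a hidden value, or computing on encrypted data), but the commit/verify split captures the basic shape of the problem.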

It is just a matter of time before these innovations make transaction privacy a reality in open finance. Stay tuned as we continue to bring you updates on these exciting developments.

Tokeny Spotlight

WEBINAR

Learn more about ABN AMRO’s successful tokenized green bond.

Watch Here

ETHCC

CTO, Tony Malghem, and head of blockchain, Joachim Lebrun, spoke at EthCC.

Read More

PARTNERSHIP

Hex Trust and Tokeny enhance tokenized securities with top-notch security.

Read More

PANEL

Our CEO, Luc Falempin, joined a panel hosted by Zama at their EthCC side event.

Read More

PRODUCT NEWSLETTER

We dive into our journey to becoming the leading onchain finance operating system.

Read More

Podcast

Our CEO, Luc Falempin, joined Adi Ben-Ari, CEO of Applied Blockchain, on their podcast.

Listen Here

Tokeny Events

Tokeny’s Teambuilding Retreat Amsterdam ☀️
August 26th-28th, 2024 | 🇳🇱 The Netherlands

RWA Summit Singapore
September 17th, 2024 | 🇸🇬 Singapore

Register Now

Token Europe
September 18th-19th, 2024 | 🇧🇪 Belgium

Register Now

European Blockchain Convention
September 25th-26th, 2024 | 🇪🇸 Spain

Register Now

ERC3643 Association Recap

Featured by JP Morgan

The ERC-3643 standard gained recognition in the recent J.P. Morgan report, “Emerging Technology Trends: JPMorganChase Perspective.”

Read more

Member Podcast 

Dennis O’Connell joined an exclusive webinar with Applied Blockchain on the topic “Adding Privacy & Trust to ERC3643 Tokens.”

Watch here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.

Previous Newsletters
- Aug 1: Transaction Privacy: The Last Blocker for Massive Open Finance Adoption (July 2024)
- Jun 28: Tokenized Securities Unaffected by MiCA, Utility Tokens and Stablecoins Face Stricter Rules (June 2024)
- May 22: Institutional RWA Tokenization Needs Permissioned Cash Coins (May 2024)
- Apr 25: BlackRock’s Influence and the Future of MMFs (April 2024)


Wednesday, 31. July 2024

TBD on Dev.to

Simplifying Cross-Platform Payments with DAPs

"Dap me up!" is a colloquial term followed by a gesture used in Western cultures to greet people or express solidarity. At TBD, we're adding a new meaning to this phrase with Decentralized Agnostic Paytags (DAPs), an open source approach designed to simplify peer-to-peer payments across various applications.

Solving an Awkward Issue

Peer-to-peer (P2P) payment applications have existed since the late '90s, starting with tools like PayPal. With the rise of smartphones, innovative mobile apps like Venmo, Zelle, and Block's very own Cash App have made it easier to exchange funds directly from our phones.

However, a persistent issue remains: the sender and recipient must use the same app to complete a transaction. People have personal and valid reasons for choosing their preferred payment apps.

This situation creates an uncomfortable, unspoken battle when you need to pay a friend after dinner or a contractor for a service, only to discover that you use Cash App while they use Venmo. Now you both face the dilemma of deciding who will download a new app, set up a new account, and link it to their bank account.

Instead, P2P payment apps can use DAPs—agnostic unique identifiers stored in a registry—to identify and route payments to the correct destination across different platforms. This allows you and the recipient to financially "dap each other up" regardless of which apps you prefer.

Introducing Decentralized Agnostic Paytags (DAPs)

A DAP is a user-friendly handle for payments, structured as @local-handle/domain.

Here's an example: I love the handle blackgirlbytes. If I registered that handle on Cash App's DAP registry, my DAP would be @blackgirlbytes/cash.app. Similarly, if I registered that handle on DIDPay's DAP registry, my handle would be @blackgirlbytes/didpay.me.

Each DAP links to a Decentralized Identifier (DID) to help identify who you are, regardless of the platform. While your DID includes cryptographic keys for identity protection, it also contains your money address—a unique identifier that directs different payment systems where to send your funds.
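The handle format above can be sketched with a small parser. This is an illustrative reading of the @local-handle/domain shape; the validation rules here are invented for the example and are not taken from the DAP specification or SDKs.

```python
from dataclasses import dataclass

@dataclass
class Dap:
    handle: str   # local handle, e.g. "blackgirlbytes"
    domain: str   # registry domain, e.g. "cash.app"

def parse_dap(raw: str) -> Dap:
    """Parse a paytag of the form @local-handle/domain."""
    if not raw.startswith("@") or "/" not in raw:
        raise ValueError(f"not a valid DAP: {raw!r}")
    handle, domain = raw[1:].split("/", 1)
    if not handle or not domain:
        raise ValueError(f"not a valid DAP: {raw!r}")
    return Dap(handle, domain)

dap = parse_dap("@blackgirlbytes/cash.app")
print(dap.handle, dap.domain)  # blackgirlbytes cash.app
```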

Get Started with DAPs

The DAP ecosystem has two key actors: the payment platform that offers DAPs and the users who own the DAPs.

For Organizations: Any organization can enable users to create a DAP on their platform by setting up a DAP registry associated with their domain. This registry serves two main functions:

1. It allows users to sign up for DAPs.
2. It maps users' DAPs to their DID and money address.

For Users: Once a DAP registry is available on your preferred platform, you can sign up for a DAP using your chosen handle.

If you're eager to experiment with DAPs but your preferred payment platform hasn't implemented a DAP registry yet, you can obtain a DAP via our static DAP registry.
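The registry's mapping role described above can be sketched as a simple lookup from handle to DID and money address. The registry contents, DID string, and money-address format below are all invented for illustration; real registries resolve DAPs as defined in the DAP specification.

```python
# Hypothetical in-memory DAP registry: handle -> {DID, money address}.
registry = {
    "blackgirlbytes": {
        "did": "did:example:123abc",
        "money_address": "usdc:0xfeedc0de",  # made-up destination identifier
    }
}

def resolve(handle: str) -> str:
    """Look up where to route a payment for a registered handle."""
    entry = registry.get(handle)
    if entry is None:
        raise KeyError(f"no DAP registered for {handle!r}")
    return entry["money_address"]

print(resolve("blackgirlbytes"))  # usdc:0xfeedc0de
```

Because each platform hosts its own registry under its own domain, a sender's app only needs to resolve the DAP to find the right destination, regardless of which app the recipient uses.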

Keep Up to Date

DAPs debuted during a company-wide Hackathon at Block, where TBD, Cash App, and Square teams collaborated to bring this vision to life. As the DAP implementation continues to evolve, here are a few ways you can stay involved:

- Join the TBD Discord
- Read the DAP specification
- Contribute to the open source SDKs: dap-js, dap-go, dap-kt, dap-dart
- Create a DAP in our static DAP registry

Watch the video below to learn more


liminal (was OWI)

Link Index for Account Takeover Prevention in Banking

The post Link Index for Account Takeover Prevention in Banking appeared first on Liminal.co.

Thales Group

Thales to enhance mobile threat simulators for German forces

Thales has been entrusted with the upkeep of critical components and peripherals for the mobile threat simulators (Mobs). These simulators are instrumental in training German forces aircrews to counteract ground-to-air missile threats in authentic combat scenarios. This three-year contract is the result of a strong partnership and trust built with the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support (BAAINBw). This collaboration is a cornerstone of the Polygone program, a tri-national initiative aimed at preparing the German forces. Under the Polygone program, a comprehensive data bank has been established, incorporating both aircraft metrics and Thales high-tech data systems. This integration provides the crews with a robust analysis and precise debriefing of the use and efficiency of their aircraft’s self-protection equipment.
SA-8 air defence missile system © Thales

Through its branch in Koblenz, Thales Deutschland has been awarded a three-year contract with the BAAINBw for the maintenance of components of the mobile threat simulators and their peripherals as part of the Polygone programme.

The contract continues a long-term partnership for services in the fields of customer-oriented advising, predictive maintenance, obsolescence management, repairs, exercise and training, and weapon systems. Thales’ site in Koblenz has maintained these systems for 15 years and will further expand its services for customers and continue its development.

“Simulation plays a key role in handling, procedural, behavioural and communication training in all operational areas and at all operational levels. Mastering modern aircraft in complex threat scenarios requires high quality training. We are thrilled to be able to continue supporting the German Armed Forces in the Polygone programme,” highlights Christoph Ruffner, CEO of Thales Germany.

 


Tokeny Solutions

Hex Trust and Tokeny Partner to Accelerate Institutional RWA Tokenization

The post Hex Trust and Tokeny Partner to Accelerate Institutional RWA Tokenization appeared first on Tokeny.

HONG KONG, 31st July 2024 – Hex Trust, a leading provider of digital asset solutions for institutional finance, protocols, foundations, and the Web3 ecosystem, has announced a strategic partnership with Tokeny, the pioneering onchain operating system for tokenized securities. With the custody integrations provided by Hex Trust, Tokeny will deliver a streamlined tokenization process enhanced with fortified security measures for operation signing.

Custody solution providers are essential for institutions venturing into real-world asset (RWA) tokenization. Through this partnership, Hex Trust’s multi-wallet architecture has been integrated into the Tokeny platform, enabling institutions already utilizing Hex Trust’s custody services to seamlessly manage their tokenized securities and securely sign operations using the same trusted wallet.

Tokeny’s platform offers an intuitive white-label platform and plug-and-play APIs that enable businesses and institutions to compliantly issue, transfer, and manage tokenized RWAs and securities. Tokeny has a fully integrated ecosystem, which includes necessary onchain and off-chain service providers required for seamless operations onchain. 

“As large institutions embrace RWA tokenization, we’ve seen a surging demand from our clients for solutions that aggregate everything they need to operate onchain seamlessly. Tokeny provides exactly this with a proven track record. By integrating Hex Trust’s custodial wallet infrastructure into the Tokeny platform, customers can start tokenizing and managing their onchain securities with the wallet they trust and are familiar with.” —Alessio Quaglini, CEO & Co-Founder of Hex Trust

Compliance and interoperability are key for tokenization projects to thrive, and Tokeny’s solutions ensure this by being fully compatible with the open-source identity-based ERC-3643 permissioned token standard. Token restrictions on investors and operations can be easily set and embedded into the token through Tokeny’s token compliance setup interface or APIs. While tokens can leave the operating platform to interact with any other application by default, only qualified investors can interact with them, and custom additional rules can be applied.

“Custody of security tokens is about token issuers and their agents securely preparing and signing advanced smart contract operations. This process requires using a proper orchestration platform with guided business workflows and then signing the corresponding blockchain transactions via a secure wallet. By combining the technologies of Tokeny and Hex Trust, we offer institutions a comprehensive solution, enabling tokenization without the burden of technological concerns.” —Luc Falempin, CEO of Tokeny

About Hex Trust

Established in 2018, Hex Trust is a fully licensed digital asset custodian dedicated to providing solutions for protocols, foundations, financial institutions, and the Web3 ecosystem. Get access to custody, DeFi, brokerage, and other services built on regulated infrastructure.

For more information, visit hextrust.com or follow Hex Trust on LinkedIn, Twitter and Telegram.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter



Aergo

Hard Fork Timeline Update

We are announcing a rescheduling of our much-anticipated hard fork, initially set for July 2024. This decision highlights our dedication to providing a secure, compliant, and feature-rich platform. Our adjusted timeline now aims for August 2024. The new specifications and features of the hard fork will include the following:

Prioritizing Security and Compliance

To support various use cases like Security Token Offerings (STOs), we are integrating new compliance-related features into the mainnet. This includes functionalities such as contract whitelists and blacklists, designed to meet stringent regulatory requirements and enhance overall security.

Integrating Composable Transactions

One of the most exciting features we are adding is support for composable transactions. This enhancement, detailed in our Composable Transactions documentation, enables more intelligent blockchain use cases, including those leveraging machine learning (ML). Users will be able to call contracts using plain text, simplifying interactions. This transparency in on-chain smart contract usage and management will significantly streamline operations and improve usability.

Text-Based Smart Contract Deployment

We are also introducing a new feature for deploying smart contracts using plain text, as outlined in our GitHub pull request. By managing smart contract source code directly on-chain, we can better support intelligent blockchain use cases like those involving ML. This approach guarantees the generation of deterministic bytecode, thereby enhancing security. On-chain management of smart contract source code ensures transparency and reliability, which are crucial for advanced blockchain applications.

Developing On-Chain ML Models

Our team is developing ML models that can be used directly on our mainnet alongside smart contracts. Unlike large language models like ChatGPT, which require substantial resources and are not specifically designed for blockchain integration, we are benchmarking smaller models like Microsoft’s Phi-3. These models are optimized for on-chain use, ensuring they are efficient and suitable for enterprise and mainnet applications. This research aims to enable seamless integration of intelligent features within our blockchain environment. We will provide more details about these ML models soon, as they will play a crucial role in enhancing the Aergo platform.

Looking Ahead

While the delay may be disappointing, we will provide regular updates as we progress toward the new timeline. We remain focused on delivering a blockchain platform with the highest performance and security standards.

We appreciate your understanding and continued support as we work diligently to bring these significant enhancements to life. Stay tuned for more updates, and thank you for being a vital part of our community.

Hard Fork Timeline Update was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


TBD

Simplifying Cross-Platform Payments with DAPs

Introducing Decentralized Agnostic Paytags, a universal money address tied to Decentralized Identifiers


Tuesday, 30. July 2024

Anonym

3 Things to Know About the New American Privacy Rights Act 

Here at Anonyome, we stay abreast of current thinking and conversation around privacy and identity management, from technology innovations to regulatory changes. 

Today, we’re looking at the long-running issue of a potential US federal data privacy law because there’s been some significant movement. 

You can catch up on the story here and here. You might also like The US Data Privacy Law “Floor”: What Deserves Basic Protections? 

The latest news on this simmering topic is that the draft American Privacy Rights Act was introduced on April 7, 2024. It’s a bipartisan US federal data protection law drafted by Congresswoman Cathy McMorris Rodgers (R-WA 5th District) and Senator Maria Cantwell (D-WA) that aims to give US citizens greater control over their personal data, limiting the ability of big tech firms to process, transfer and sell the information. 

The draft American Privacy Rights Act also mandates that companies must meet stronger cybersecurity standards to protect the personal data they hold from being hacked or stolen and gives enforcement powers for violations to the Federal Trade Commission (FTC), states and individuals. 

Three of the draft bill’s key provisions are: 
 

- Limiting the data that companies can collect, keep, and use about people to what those companies actually need in order to provide them with products and services
- Boosting powers for citizens to control how companies use their personal data, such as preventing companies from transferring or selling their data, and opting out of data processing if a company changes its privacy policy
- Requiring companies to obtain express consent before transferring sensitive data to a third party

 
In releasing the draft bill, Congresswoman Rodgers said: “This landmark legislation gives Americans the right to control where their information goes and who can sell it. It reins in Big Tech by prohibiting them from tracking, predicting, and manipulating people’s behaviors for profit without their knowledge and consent. Americans overwhelmingly want these rights, and they are looking to us, their elected representatives, to act.” 

Rodgers and Cantwell argue that “… their draft legislation represents the best opportunity in decades to establish a national data privacy and security standard in the US.” It’s intended to enhance the patchwork of legislation that’s been knitted together at the state level in the US for several years, which will make data privacy protections more consistent for US citizens nationally and reduce the compliance burden on businesses.

Importantly, the draft American Privacy Rights Act is based on similar principles to the EU’s General Data Protection Regulation (GDPR). The US is one of the few major global economies without strong national privacy laws akin to the GDPR.

While there’s optimism for this bill, we’ve been at this point a few times with US national privacy legislation, most recently in 2022. And some question whether it goes far enough. The Electronic Frontier Foundation (EFF) isn’t impressed, saying Americans deserve more than the current bill. Their position is that “a new federal bill would freeze consumer data privacy protections in place, by pre-empting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.” 

The EFF also wants portions of the bill strengthened:  

- Making it easier to sue companies that violate consumer rights
- Limiting sharing with the government
- Expanding the definition of sensitive data
- Narrowing exceptions that allow companies to exploit consumer biometric information, so-called “de-identified” data, and data obtained in corporate “loyalty” schemes

Read the EFF’s position here.  

At Anonyome, we see the introduction of this bill as yet another moment on the long road to a federal data privacy law where we hope lawmakers don’t allow their desire for the perfect (and un-passable) law to get in the way of passing a good law that raises the bar for the approximately 30 US states and territories yet to pass privacy legislation. And we’ll watch closely when the lobbyists get to it: we expect more carve-outs, US state pre-emption concerns, and questions over whether it will get passed before November. 

As always, watch this space. 

The post 3 Things to Know About the New American Privacy Rights Act  appeared first on Anonyome Labs.


KuppingerCole

Asking Good Questions About AI Integration in Your Organization – Part II

The integration of AI poses both unprecedented opportunities and challenges for organizations. Our webinar, "Asking the Right Questions: Navigating AI Integration in Enterprise Security," addresses the pressing need for CISOs, CIOs, and other information risk management professionals to navigate the complexities of AI adoption effectively. Led by John Tolbert, Cybersecurity Director at KuppingerCole Analysts, Dr. Scott David, LL.M., Executive Director - Information Risk and Synthetic Intelligence Research Initiative at the University of Washington – APL, and Matthias Reinwarth, Director Practice IAM, Senior Analyst at KuppingerCole Analysts, this session offers a deep dive into the pivotal role of asking good questions in guiding organizations through the maze of emerging AI risks. 

This trio’s expertise sheds light on the growing responsibility of CISOs in coordinating information risk mitigation, particularly in the realm of AI-amplified risks. Through a questions-based approach, attendees will gain insights into strategies for fostering cross-departmental coordination and shaping discussions across the enterprise. The webinar series aims to equip participants with practical tools to address AI-related concerns across various organizational functions, fostering interoperability and revealing best practices amidst the current lack of established standards.




Thales Group

Thales and Garuda Aerospace sign MoU for secure drone operations in India

Thales has signed a memorandum of understanding (MoU) with Garuda Aerospace to promote growth and innovation in the drone sector in India. Under the agreement, Thales will provide expertise in the field of Unmanned Traffic Management (UTM) solutions, UAV detection, and system integration, whilst Garuda will bring its skills in the manufacture and use of UAVs, as well as its expertise in the Indian market. The MoU aims to provide a platform for a strategic collaboration to develop the drone ecosystem in India.
©Thales

Thales, a global leader in the aerospace industry, and Garuda Aerospace have signed a Memorandum of Understanding (MoU) to promote the development of the drone ecosystem in India. This collaboration aims to foster innovation and to advance the development of technological solutions that can enable safe and secure drone operations and help the growth of drone-based applications in India.

In addition to its broad expertise in the field of UTM solutions for the seamless management of Unmanned Aerial Vehicle (UAV) flight authorisations, Thales offers a range of radar and sensors for high-performance UAV detection, as well as being experienced in system integration. Garuda Aerospace, known for its expertise in the Indian market, is a leader in UAV manufacturing, and has extensive knowledge of the production of high-tech UAVs and service applications.

Established in 2015, Garuda Aerospace is a key player in the Indian drone sector, catering to the diverse needs of the industry. The company focuses on building advanced drone solutions for the armed forces, in collaboration with global giants in the defence and aerospace sectors. It also has a vast fleet of over 2500 drones and 4000 pilots across 400 districts.

Thales is recognised around the world for its expertise in aerospace and UAV solutions. From design and development to implementation and maintenance, Thales has built end-to-end solutions for drone integration and the development of advanced UTM systems. The company works closely with civil aviation authorities and air navigation service providers to deliver strategic UTM capabilities, including registration, authorisation and geo-awareness, while ensuring that incremental capabilities, such as aircraft tracking and deconfliction, can be added in the long term. ​

The MoU aims to transform the Indian drone landscape, and will come into effect in August 2024.

Ashish Saraf, VP and Country Director, Thales in India, stated: “The government is providing a robust foundation for the drone ecosystem, fostering opportunities for collaboration, innovation, and growth. We are proud to partner with Garuda Aerospace in paving the way for the development of advanced UTM systems in India by leveraging our extensive global experience and expertise in aeronautical solutions. This collaboration aligns well with the Aatmanirbhar Bharat (Self-Reliant India) vision, and seeks to support India in realising its ambition to become a major global hub for drones by 2030.”

Speaking on the partnership, Agnishwar Jayaprakash, Founder CEO, Garuda Aerospace, said: “We are thrilled to partner with Thales in driving technological innovations for the development of drones and drone-based applications in India. Ever since Honourable Prime Minister Shri Narendra Modi ji launched 100 Garuda drones in 100 Villages, we have scaled and cemented market dominance in the precision agri drone segment where 50% of agri drones in India is Garuda's. Equipped with the largest fleet in India coupled with Thales’ UTM technology and their worldwide experience, Garuda Aerospace will aim to revolutionize the drone sector and play a key role in the transformation of India into a global drone powerhouse.”

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G. Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

About Thales in India

Present in India since 1953, Thales is headquartered in Noida and has other operational offices and sites spread across Delhi, Bengaluru and Mumbai, among others. Over 2200 employees are working with Thales and its joint-ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in Defence, Aerospace and Cybersecurity & Digital Identity markets. Thales has two engineering competence centres in India - one in Noida focused on Cybersecurity & Digital Identity business, while the one in Bengaluru focuses on hardware, software and systems engineering capabilities for both the civil and defence sectors, serving global needs.

About Garuda Aerospace

Garuda Aerospace is India’s leading Drone tech start-up focused on disrupting two major multi-billion-dollar sectors, Precision Agri Tech and Industry 4.0 upgradation. Garuda Aerospace is asset-light, recession-proof, agnostic, and focuses on eliminating labourers in the agricultural field with drones focusing on designing, building, and customization of Unmanned Aerial Vehicles (UAVs). Founded in 2015 with a team of 5, Garuda has scaled to 200+ member team having the largest drone fleet in India with over 2500 drones and 4000 pilots operating in 400 districts. Garuda Aerospace manufactures 30 types of drones and offers 50 types of services. Having served over 750 clients including TATA, Godrej, Adani, Reliance, Swiggy, Flipkart, Delhivery, L&T, Survey of India, SAIL, NTPC, IOCL, Smart cities, Intel, Amazon, Wipro, IISC, MIT Boston, NHAI for various projects, the company recently partnered with global giants such as Lockheed Martin, Cognizant and Elbit Systems. Hon’ble Prime Minister Shri Narendra Modi Ji launched the drone yatra where 100 drones were flagged off simultaneously across 100 villages in India. Garuda Aerospace is the first drone company to get DGCA approvals for Type Certification and Remote Pilot Training Organisation. Garuda is on a mission to impact 1 billion lives positively using affordable precision Drone Technology. Mahendra Singh Dhoni has invested in the company and is the Brand Ambassador.

PRESS CONTACTS

Thales, Communications in India

Pawandeep Kaur

+91 120 40 20 555

Pawandeep.kaur@thalesgroup.com

Chase India

Prakhar Mishra

+91 6394794255

Prakhar@chase-india.com

Press release, 30 Jul 2024. Contact: Alice Pruvot, Head of Media Relations, Aeronautics & Defense.

liminal (was OWI)

2024 Q2 Market and Investment Trends Report

The post 2024 Q2 Market and Investment Trends Report appeared first on Liminal.co.

Ayan Works

How Can Reusable IDs Transform Your Identity Verification Process?

Identity verification is a fundamental part of our daily interactions, whether we are opening a new bank account, checking into a hotel, or making an online purchase. Traditional methods of verification, however, are often slow, repetitive, and fraught with security risks. Enter reusable IDs: solutions designed to take on exactly these challenges.

The Problem with Traditional Identity Verification

Repeated ID Requirements:

Users are often required to present identity documents such as a driver’s license, national ID, or passport more than once for KYC (Know Your Customer) checks when registering with different service providers.

Costly Verification Processes:

Businesses face high costs, in both time and money, when confirming the identity of the same person across different platforms. According to industry reports, large financial institutions potentially spend in the range of $250 to $500 million annually on maintaining KYC and AML (Anti-Money Laundering) compliance. For the top 10% of these institutions, this figure can reach about $100 million (source).

According to a whitepaper published by Consult Hyperion, a single KYC verification can cost between $13 and upwards of $130. For example, if a customer is required to complete five separate KYC verifications for the same bank, the bank incurs a total cost of $65. However, by implementing reusable identity as a service, the bank only needs to verify the customer once, keeping the cost at $13 and saving significant resources. This not only reduces financial outlays but also frees up personnel for more strategic tasks.

Consider the impact on a bank with 100,000 customers:

Without reusable KYC, verifying each customer five times costs the bank $65 per customer, totaling $6.5 million. With reusable KYC, the cost per customer is reduced to $13, resulting in a total cost of $1.3 million.

This translates to a potential savings of $5.2 million, illustrating the substantial financial benefits of adopting reusable KYC solutions on a large scale.
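As a sanity check, the savings arithmetic above can be reproduced in a few lines of Python. All figures are the ones quoted in the text ($13 per check, five checks, 100,000 customers); real per-check costs vary up to $130 or more.

```python
# Back-of-the-envelope model of the reusable-KYC savings described above.
# This is an illustration of the quoted figures, not a full cost model.

cost_per_check = 13          # USD, lower-bound KYC cost per verification
checks_per_customer = 5      # separate KYC events without reuse
customers = 100_000

without_reuse = cost_per_check * checks_per_customer * customers
with_reuse = cost_per_check * customers  # verify once, reuse thereafter
savings = without_reuse - with_reuse

print(without_reuse, with_reuse, savings)  # 6500000 1300000 5200000
```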

Complexities in Online Verification:

While in-person identity verification is straightforward, verifying identities online presents significant challenges. According to a Moneycontrol report in 2019, the global commercial banking sector suffered a significant loss of USD 3.3 trillion due to abandoned onboarding applications.

Reliance on Physical Identification:

Despite technological advancements, there is still heavy reliance on physical IDs, like birth certificates and driver’s licenses, in the digital world.

Introducing Reusable IDs: A Modern Solution

What Are Reusable IDs?

Reusable IDs are secure, portable and interoperable digital representations of a person’s verified identity. Instead of undergoing repeated verification processes for each service, users can verify their identity once and business entities can reuse this verified information across multiple departments. This innovation significantly enhances security, efficiency, and user experience.

How Do Reusable IDs Work?

Reusable IDs combine advanced technologies such as Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and digital wallets to ensure secure, privacy-preserving identity verification. After a user proves their real identity through an ID verification service by submitting a legal ID and taking a selfie, their verified identity is stored securely as a digital credential within a digital wallet app.

This clearly departs from traditional practices, where every institution verifies new customers individually. With the user’s consent, verified identity data can be shared securely across institutions, reducing redundancy and enhancing operational efficiency and security.
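The issue-once, verify-many pattern can be sketched as follows. This is a deliberately simplified illustration: a production system would use the W3C Verifiable Credentials model with public-key signatures (for example Ed25519) anchored to a DID, not the shared HMAC key invented here.

```python
import hashlib
import hmac
import json

# Illustrative only: the issuer key and flow below are made up for this sketch.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(subject_did, claims):
    """Issuer runs KYC once, then signs a reusable credential for the wallet."""
    payload = json.dumps({"sub": subject_did, "claims": claims}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_credential(credential):
    """A relying party checks the issuer's signature instead of re-running KYC."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("did:example:123", {"kyc_passed": True})
assert verify_credential(cred)  # bank A, bank B, ... all reuse the same proof
```

With asymmetric keys, verifiers would need only the issuer’s public key, so any institution could check the credential without sharing a secret.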

The Advantages of Reusable IDs

Enhanced Security:

The advanced encryption and authentication technologies used in reusable IDs reduce the risks of identity theft and other fraud that can arise from maintaining multiple verification channels. Privacy is preserved through zero-knowledge proofs, selective disclosure, and equivalent techniques that allow the user to disclose only the information relevant to the transaction.
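Selective disclosure can be illustrated with a toy salted-hash commitment scheme: the issuer commits to each attribute separately, and the holder reveals only the attributes a verifier asks for. Real deployments rely on cryptographic schemes such as BBS+ signatures or zero-knowledge proofs; everything below is a simplified sketch with invented names.

```python
import hashlib
import secrets

def commit_attributes(attrs):
    """Issuer publishes one salted-hash commitment per attribute."""
    salts = {k: secrets.token_hex(16) for k in attrs}
    commitments = {
        k: hashlib.sha256(f"{k}:{v}:{salts[k]}".encode()).hexdigest()
        for k, v in attrs.items()
    }
    return commitments, salts

def reveal(attrs, salts, keys):
    """Holder discloses only the requested attributes, with their salts."""
    return {k: (attrs[k], salts[k]) for k in keys}

def check(commitments, disclosed):
    """Verifier recomputes hashes for the disclosed attributes only."""
    return all(
        hashlib.sha256(f"{k}:{v}:{salt}".encode()).hexdigest() == commitments[k]
        for k, (v, salt) in disclosed.items()
    )

attrs = {"name": "Alice", "over_18": True, "nationality": "DE"}
commitments, salts = commit_attributes(attrs)
disclosed = reveal(attrs, salts, ["over_18"])  # age check: name stays private
assert check(commitments, disclosed)
```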

Streamlined Operations:

By eliminating redundant verification steps, reusable IDs reduce administrative burdens and operational costs. Businesses can reallocate resources more effectively, improving efficiency and customer service. Standardized verification procedures also simplify compliance, ensuring consistent and reliable identity verification across different institutions.

Improved User Experience:

Reusable IDs offer a better user experience by making transactions much easier and simplifying onboarding processes. The ability to use just one verified identity across services promotes trust and loyalty with customers. Additionally, reusable IDs empower users to control their personal data, deciding who has access to their information and when.

Sustainability and Alignment with SDGs:

Implementing reusable IDs contributes to environmental sustainability by reducing the need for paper-based verification processes. By transitioning to digital identities, organizations can significantly decrease their paper consumption, which supports Sustainable Development Goal 12.5. This not only minimizes the environmental impact but also aligns with global efforts to promote sustainable practices and reduce waste.

Real-World Applications and Benefits

For Financial Institutions:

Reusable IDs streamline customer onboarding, saving the time and resources spent on repeated identity checks. Fewer manual verification events mean lower costs, greater operational efficiency, and a better customer experience.

For E-Government Services:

Through digital channels, governments can expand value-added services such as e-filing of taxes, issuance of permits, and even voting. Using reusable identities, citizens can securely engage with such services through their pre-verified digital identities.

Rather than repeatedly providing and verifying personal information for each government interaction, citizens benefit from the simplicity of using their verified digital identity. This approach streamlines access to government services online, promoting efficiency and user convenience across diverse digital platforms.

For Healthcare Providers:

Reusable IDs enable quicker access to services by verifying patient identities swiftly and securely. This approach enhances patient care and simplifies compliance with privacy regulations that protect crucial medical information.

Verifying Age for Online Services:

With reusable identities, users can confirm their age for restricted online services, such as buying alcohol or viewing adult content, without revealing other personal details. This approach protects user privacy while meeting legal requirements, offering a secure and efficient solution for age verification.

Advantages of Reusable Identities for Businesses

Cost-Effective Solution:

Reusable identity solutions offer substantial benefits for both consumers and businesses. For businesses, they provide a cost-effective way to authenticate users without storing sensitive data.

Enhanced user experience:

A smooth user experience is crucial for customer acquisition; studies indicate that many potential customers abandon sign-up processes due to poor user experiences.

Simplifying the sharing of verified IDs with new service providers reduces friction, building on users’ initial efforts in document and facial verification. This streamlined approach not only boosts customer satisfaction but also enhances operational efficiency for businesses.

Driving Revenue Growth:

According to research by McKinsey, even a slight improvement in customer satisfaction during onboarding — measured on a scale of ten — can lead to a 3% increase in customer revenue.

For a company earning $500 million from new customers, each incremental satisfaction point translates into an additional $15 million in potential revenue. This underscores the importance of prioritizing customer satisfaction during the initial user experience to drive substantial revenue growth.
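That figure is straightforward to reproduce:

```python
# Reproducing the revenue uplift quoted from McKinsey: each extra
# satisfaction point is associated with ~3% more new-customer revenue.

new_customer_revenue = 500_000_000   # USD, example from the text
uplift_per_point_pct = 3             # 3% per satisfaction point

extra_revenue = new_customer_revenue * uplift_per_point_pct // 100
print(f"${extra_revenue:,}")  # $15,000,000
```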

According to Gartner Research, over half a billion people will be using verifiable credentials by 2026, and the decentralized identity market is projected to be worth over $3 billion by 2031. This rapid growth suggests significant opportunities for businesses that leverage reusable identity software, positioning them to capture increased market share and drive substantial revenue growth.

Conclusion: Embracing the Future of Identity Verification

Reusable IDs represent a transformative shift in identity verification, offering unparalleled security, efficiency, and user convenience. By addressing the pain points of traditional methods, reusable IDs empower businesses to enhance operational standards, comply with regulations, and build stronger customer relationships.

CREDEBL is designed to transform your identity verification process with its innovative reusable ID solution. Leveraging advanced technologies, CREDEBL provides secure, efficient, and user-friendly verification, helping businesses enhance their operations and build stronger connections with customers. As an open-source platform and a recognized Digital Public Good (DPG), CREDEBL offers transparency, accessibility, and widespread adoption, making it an ideal choice for organizations seeking reliable and advanced identity solutions.

With CREDEBL, sensitive information is protected with features like Zero-Knowledge Proofs and selective disclosure, ensuring that users control their personal data and share only what is necessary.

Ready to revolutionize your identity verification process? Contact us today to discover how CREDEBL can transform your operations and elevate your customer experience.


Shyft Network

Veriscope Regulatory Recap: July 8th to July 27th

Welcome to the latest issue of the Veriscope Regulatory Recap. In this edition, we explore the significant regulatory updates in South Korea and Italy, focusing on how these changes impact the cryptocurrency landscape.

South Korea Implements Comprehensive Crypto Regulations

South Korea has recently enacted its first comprehensive set of cryptocurrency regulations, known as the Virtual Asset User Protection Act.

Triggered by substantial market disruptions in the past, this legislation is now fully in effect and imposes stringent requirements on digital asset exchanges.

What are the Key Mandates?

Key mandates include securing the majority of customer crypto holdings in cold storage and ensuring that all customer cash deposits are managed through licensed banks.

Additionally, exchanges are required to set up advanced systems to monitor and report any suspicious trading activities, significantly boosting the security and transparency of digital transactions.

Possible Impact on the Crypto Industry

These regulations could have both pros and cons for the industry. On one hand, they could enhance market safety and transparency, which could attract more investment. On the other, the resulting increase in compliance costs and the stringency of the controls might inhibit operational flexibility and innovation within the sector.

Italy Prepares for MiCA Implementation

Italy is gearing up to implement the European Union’s Markets in Crypto-Assets (MiCA) regulation. This framework aims to stabilize the financial system while fostering innovation and consumer protection in the digital asset space.

The Bank of Italy is set to release guidelines that will detail how these new rules should be applied, marking a significant step towards harmonizing crypto regulations across Europe.

Implications for the Crypto Industry

The upcoming guidelines from the Bank of Italy are set to clarify how the new rules should be applied, potentially boosting investor confidence by creating a consistent regulatory environment across Europe. While this promotes a safer investment climate, the detailed requirements could pose challenges for smaller entities, possibly restricting market diversity.

Overall, the regulatory updates in South Korea and Italy are crucial for crypto market stakeholders, as they establish clearer rules and enhance the industry’s overall security and trustworthiness.

Carret, OmniEX, and Mudrex Choose Shyft Veriscope and User Signing for FATF Travel Rule Compliance

Carret, OmniEX, and Mudrex, prominent names in India’s crypto scene, have adopted Shyft Network’s Veriscope and User Signing to comply with the FATF Travel Rule. This integration facilitates direct and secure exchanges of Travel Rule data and cryptographic proof from users’ wallets, which is required for Travel Rule compliance.

By choosing Shyft Veriscope & User Signing, the firms are reinforcing their commitment to adhering to global regulatory standards while prioritizing user experience and security.

Interesting Reads

Zero-Knowledge: The Future of More Secure and Scalable Blockchain

The Rising Focus on L2 Solutions in the Crypto Ecosystem

FATF Travel Rule Compliance Guide for Gibraltar

A Guide to FATF Travel Rule Compliance in Liechtenstein

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Veriscope Regulatory Recap: July 8th to July 27th was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

Top 8 findings from the UKGC’s Gambling Survey for Great Britain.

First a gambling white paper, then a series of industry consultations, now a public survey. Is the United Kingdom Gambling Commission the industry’s most responsible regulator?

You know how it is. You wait for one significant regulatory change or shift in legislative focus and then they all come at once.

It’s safe to say that the UK’s gambling industry has been in a state of turmoil for a few years now. In 2020, Boris Johnson launched a review of the Gambling Act 2005, raising concerns that the document might need to be updated to apply to the digital age. Indeed, it absolutely did, as at the time the 2005 Act was created, almost all gambling took place in brick-and-mortar betting shops, casinos and racetracks. Now, the industry makes more than 40% of its £14 billion annual revenue from online gambling. 

In 2023, the UK gambling white paper (‘High Stakes: Gambling Reform for the Digital Age’) finally arrived. A series of consultations was carried out throughout 2023 and 2024 to address a) the gaming experience; b) support services and c) promotion of responsible gambling, including, of course, the implementation of the controversial affordability checks. 

Now, following hot on the heels of the new and upcoming regulatory changes to the UK’s gambling environment, the United Kingdom’s Gambling Commission has released its first ever Gambling Survey for Great Britain (GSGB). 

“Data in this report represents the first year of a new baseline, against which future changes can be compared and as such will prove invaluable in deepening further our understanding of gambling across the country,” said Tim Miller, Executive Director of Research and Policy.

Understanding the Great British public’s attitudes toward gambling.

Produced by the National Centre for Social Research and the University of Glasgow, and featuring responses from 9,804 participants, the GSGB has four main aims:

- Identify who is gambling
- Identify what types of gambling activities they participate in
- Understand their reasons for, and experiences of, gambling
- Understand the consequences of gambling, for both the player and their family

“We welcome the UKGC’s efforts in gathering insights into the gambling habits of the UK market. This information will enable operators to better serve the gambling community and ensure the UKGC continues to be regarded as the industry’s most progressive and most forward-thinking regulator. We also applaud the steps taken by the UKGC since the release of the gambling white paper, especially regarding the implementation of affordability checks.

At IDnow, we offer the industry’s most comprehensive affordability check to ensure the checks are carried out seamlessly for both those who need it and those who don’t.”

Bruce Glover, UK Gambling Manager at IDnow.
8 most interesting findings from the GSGB.

48% of adults aged 18 and over participated in some form of gambling over the last 4 weeks.

Participants are more likely to gamble online (37%) than in person (29%).

Most common reasons for gambling:
a) The chance of winning big money (86%)
b) Because gambling is fun (70%)
c) To make money (58%)
d) Because gambling is exciting (55%)

Consequences from gambling.

While the vast majority of those who gamble do so recreationally, spending an average of just £2.70 per week, there are 300,000 people estimated to be engaged in problem gambling. 

Financial risk checks, formerly known as affordability checks in the UK, were essentially designed to protect players from gambling online with money they cannot afford to lose. Operators conduct them to ensure they are not encouraging problem gambling, and that their products and services are being used responsibly.  

Similar in concept to the ability-to-pay checks conducted by financial service providers of credit cards and loans, financial risk checks run in the background, ultimately satisfying the operator that the gambler has sufficient funds to continue placing bets safely. 

Assessing the size and scale of problem gambling in the UK was deemed an important part of the survey. No topic has been more polarizing to the gambling industry than the proposed financial risk checks. Read our two blogs, ‘Why 6 financial risk check indicators are better than 2.’ and ‘Worth the risk. Arguments against affordability checks in gambling.’ 

The GSGB outlines two different consequences from problem gambling:

Definition of severe adverse consequences:

– Losing something of significant financial value (home, job, business, etc.).

– Relationship breakdown with spouse or family members.

– Experiencing violence or abuse.

– Committing crime to fund gambling or gambling debts. 

Definition of potential adverse consequences:

– Reducing or cutting back spending on everyday items like food, clothing and bills.

– Using savings or increasing credit.

– Experiencing conflict or arguments with friends, family and colleagues.

– Feeling isolated from other people, left out or alone.

– Lying to family and others to hide extent of gambling.

– Performing poorly at work or study.

Challenges in Compliance Survey: Download to discover the top concerns for gambling operators, what causes players to abandon onboarding, and the likely effect of the UK’s upcoming financial risk checks. Download now.

– The most common severe consequence of gambling (1.6%) was relationship breakdown.
– The most common adverse consequences were reducing spending on everyday items (6.6%), lying to family (6.4%) and feeling isolated (5.5%).
– Male participants who had gambled in the past 12 months (3.5%) were more likely than female participants (2.1%) to experience at least one severe consequence due to gambling.
– 5.9% of adults aged 18-34 were likely to experience at least one severe consequence from gambling, compared to just 0.7% of adults aged 55 and over.
– 3.2% of adults who’d gambled in the last 12 months had sought support.

The Great British balancing act.

Creating a marketplace that is attractive enough for operators to compete in, yet safe and secure enough for players to gamble responsibly, is no easy feat. In launching the inaugural annual GSGB (which is due to double in size and scope next year), the UKGC is taking significant steps towards achieving this aim. 

Our highly configurable platform for identity in the gaming market allows operators to keep abreast of constantly changing regulations and new fraud schemes and scams, while ensuring a safe and secure gaming experience, from the UK to LATAM. With our latest round of platform enhancements, we now offer UK-specific data checks and financial risk checks, which provide operators with insights into online gamblers’ affordability indicators and help them comply with the latest updates to the UK Gambling Act.  

Read more about the UKGC’s plans for the UK’s gambling sector in our blog, ‘From paper to policy: What’s next for the UK’s gambling sector?’

For more information on the UK’s ever-evolving regulatory framework, read our interview, ‘Controlling the comfort zone, with Roger Redfearn-Tyrzyk, VP Global Gaming at IDnow.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


KuppingerCole

Sep 12, 2024: The Security You Need: Seamlessly Integrating PAM and IGA for Ultimate Protection

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their security posture.

Eviden DirX Directory

by Nitish Deshpande

This KuppingerCole Executive View report looks at Eviden’s DirX Directory solution from its DirX portfolio. DirX Directory is a directory service supporting integration with LDAP servers. This report will take a look at the features of its latest release with an overview of its strengths and challenges.

BlueSky

Schedule and Cross-Post with Buffer

We're thrilled to announce that Bluesky has partnered with Buffer, a social media toolkit with scheduling and cross-posting features, to make posting to Bluesky even easier!

Using Buffer to post to Bluesky.

With Buffer’s scheduling feature, you can plan and organize your Bluesky posts alongside your other social media content. This means you can maintain a consistent presence on Bluesky without the need to switch between multiple apps or platforms.

At Bluesky, we’re building an open social network. By teaming up with Buffer, we’re making it easier for you to seamlessly integrate Bluesky into your broader social media habits, breaking down barriers between platforms and giving you more control over your online presence.

Ready to get started? Head over to Buffer's website to learn more about how to connect your Bluesky account and begin scheduling your posts. Happy posting!

Monday, 29. July 2024

HYPR

How To Improve Okta Security

Okta is one of the most widely-used single sign-on (SSO) providers, making authentication more convenient for organizations and their users alike. We at HYPR use Okta. This convenience, however, comes at a price. Okta deployments are both highly targeted by and, as repeatedly demonstrated, highly vulnerable to attacks.


Perhaps the most notorious incidents in recent years were the 0ktapus attacks. Over the course of several months, hackers were able to bypass Okta security processes to log into scores of corporate SSO instances, including Twilio, MailChimp and DoorDash. Okta customers were also the target of a string of social engineering attacks, where cybercriminals convinced IT help desk staff to reset MFA credentials of super admin accounts.

Okta's new passwordless authentication option, FastPass, promises to mitigate these security weak points, but a recent analysis showed that it too can be bypassed. Moreover, most FastPass implementations fall back to a password plus a less secure factor. While this helps minimize lockouts and disruptions, it opens another pathway for attackers.

How Attackers Breach Okta Security Defenses

Phishing and Credential Theft

Okta provides its customers with multiple forms of authentication for services so that organizations can enforce MFA. The most widely used forms include temporary codes delivered over SMS through Twilio or via authenticator apps. However, as demonstrated time and again, even with these MFA options enabled, attackers can break in fairly easily, gaining wholesale access to connected accounts and applications. This is because the standard Okta security approach is predicated on shared secrets, which can be phished or intercepted through a number of different techniques.

In the case of the 0ktapus attacks, security researchers from Group-IB found that the threat campaign targeted employees of companies that use Okta SSO, sending them text messages containing links to phishing sites that spoofed the Okta login page of their organization. Many of the phone numbers were obtained from a previous successful hack of cloud communications provider Twilio, which itself was hacked using the same methods.

As victims enter their login details and 2FA code on the spoofed page, the attackers perform a simultaneous login on the actual Okta page, gaining a session token and access. From there, attackers have wide-ranging potential for further escalation.

Illustration of the 0ktapus attack flow

Help Desk Social Engineering

The IT service desk is another vulnerable point in most organizations. Without strong identity verification processes in place, fraudsters can impersonate legitimate employees to gain access. For example, they may claim they lost their phone and need to reset their multi-factor authentication factors. This is what happened to multiple Okta customers in an orchestrated social engineering campaign linked to the Scattered Spider threat group. Malicious actors phoned the company’s IT help desk and convinced staff to reset the MFA settings of highly privileged Okta platform admin accounts. They went on to use their privileged access to compromise other applications across victim organizations.

Scattered Spider help desk social engineering attack


Tips to Improve Okta Security

These Okta security flaws show why more robust identity security protocols that can resist phishing, interception and social engineering attacks are necessary across all IAM procedures. Here we'll look at some actions you can take to strengthen Okta security in your organization.

1. At a Bare Minimum, Enable MFA on All User Accounts

Multi-factor authentication is enabled by default for admins under Okta security protocols and it should be the minimum authentication standard set for all users. As we’ve seen, however, traditional MFA can be easily breached, especially when phishable factors are used, such as passwords and one-time passwords (OTPs). SMS is particularly vulnerable — if traditional MFA is used, disable SMS as an option. Ideally you should deploy phishing-resistant MFA (see tip #6). 

2. Use Role-Based Access Controls (RBAC)

Regular reviews of what accounts have access to, and strictly limiting admin-level powers to relevant users, can reduce the impact of possible breaches. RBAC also means account access is checked and changed according to a user's current needs rather than carrying over outdated access requirements. Keep the number of Super Admins (the highest level of admin privileges) to an absolute minimum. This decreases the possibility of an attacker gaining access to these highly privileged accounts and causing even greater damage. 

3. Set Session Lifetime Rules

Enforcing stricter session lifetime rules on inactive sessions reduces the possibility of legitimate sessions being hijacked by attackers. This is especially important given that many employees now work outside the protected office environment.
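As an illustration, an idle-timeout rule of this kind can be expressed in a few lines. The 15-minute window below is an assumed example value, not an Okta default, and the function names are ours:

```python
import time
from typing import Optional

# Minimal sketch of an idle-session lifetime rule. The 15-minute window is
# an assumed example value, not a vendor default.
IDLE_TIMEOUT_SECONDS = 15 * 60

def session_expired(last_activity_ts: float, now: Optional[float] = None) -> bool:
    """Return True when the session has been idle longer than the allowed window."""
    now = time.time() if now is None else now
    return (now - last_activity_ts) > IDLE_TIMEOUT_SECONDS
```

In practice the timestamp would come from the IdP's session store, and an expired session would force re-authentication rather than silently continuing.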

4. Enable User Event Notifications

Important activity on a user’s account, such as sign-ins from a new device or changes to factors used on an account, can be flagged through Okta security notifications. This way, notifications can be quickly escalated by the user or admin. Beware, however, that users can develop fatigue from the number of notifications they receive from various accounts, so they may not give them the attention they deserve.

5. Don’t Use Your Identity Provider To Log In to Your Identity Provider

One of the most effective methods of relieving pressure on Okta security is to completely remove the authentication burden from the SSO in the first place. SSOs are effective services for easing workflows and managing access to a user's suite of applications; however, this places a significant target on their backs for attackers seeking access to those user privileges. Separating the authentication provider from the SSO provider and using a more secure passwordless authentication solution makes it far more difficult for attackers to break in.  

6. Use Phishing-Resistant Passwordless MFA

The single most effective method to strengthen your SSO’s security posture is to use IdP-agnostic, phishing-resistant, multi-factor authentication. One of the major Okta security issues is how easily attackers can phish, intercept or bypass MFA that uses SMS, OTPs or push notifications. By removing passwords and phishable factors, and authenticating using biometric identifiers and public key infrastructure (PKI), you eliminate the potential for phishing, MFA bombing and man-in-the-middle attacks.
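The property that makes PKI-based MFA phishing-resistant is that the authenticator binds its response to the origin it actually sees, so a response captured on a lookalike domain is useless to the attacker. A toy sketch of that origin binding, where an HMAC over a device-held key stands in for the asymmetric signature a real FIDO2 authenticator would produce:

```python
import hashlib
import hmac
import secrets

# Sketch of origin-bound challenge-response, the core of FIDO2/WebAuthn
# phishing resistance. An HMAC keyed by a device-held secret stands in for
# the asymmetric signature a real authenticator would produce.

def sign_assertion(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    # The authenticator signs the challenge together with the origin it sees,
    # so a response produced on a phishing domain cannot be replayed elsewhere.
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify_assertion(device_key: bytes, challenge: bytes,
                     expected_origin: str, assertion: bytes) -> bool:
    expected = sign_assertion(device_key, challenge, expected_origin)
    return hmac.compare_digest(expected, assertion)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

legit = sign_assertion(key, challenge, "https://sso.example.com")
phished = sign_assertion(key, challenge, "https://sso.examp1e.com")  # spoofed origin

assert verify_assertion(key, challenge, "https://sso.example.com", legit)
assert not verify_assertion(key, challenge, "https://sso.example.com", phished)
```

Compare this with an OTP: the six-digit code carries no origin information, so a user who types it into a spoofed page hands the attacker everything needed to log in.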

To learn more about what to look for in a passwordless solution, read our Passwordless Security Evaluation Guide.

7. Implement Measures to Protect Desktop Login and Offline Users

Unfortunately, the more secure Okta options do not easily extend to desktop login, VPN access or remote situations. Specific employee groups, such as those working in clean rooms, on the factory floor or in the field may also have access and device limitations. Make sure you have identity security processes in place to cover all your use cases and user populations.

8. Deploy Automated, Continuous Identity Verification

For most organizations, comprehensive identity verification is limited to specific points in time, such as employee onboarding. But there are other times when identity verification is equally critical, such as resetting a credential or registering a new device. The standard identity checks tend to be knowledge-based answers or calling the helpdesk — which, as evidenced, are prone to social engineering. Generative AI and deepfake technology have made these processes even more vulnerable to attack.  Multi-layered, risk-based identity verification combines a series of factors such as location, behavior, document verification and face recognition so that you can be certain that an identity is genuine.
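Combining such factors usually comes down to some form of risk scoring. The sketch below is purely illustrative; the signal names, weights and threshold are our assumptions, not HYPR's actual model:

```python
# Illustrative risk-scoring sketch. Signal names, weights and the step-up
# threshold are assumed example values, not any vendor's actual model.
RISK_WEIGHTS = {
    "new_location": 0.3,
    "new_device": 0.25,
    "document_check_failed": 0.35,
    "face_match_failed": 0.4,
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every triggered risk signal, capped at 1.0."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def requires_step_up(signals: dict, threshold: float = 0.5) -> bool:
    """Decide whether to demand stronger identity verification."""
    return risk_score(signals) >= threshold

# A single mild signal passes; a combination of signals triggers step-up.
assert not requires_step_up({"new_location": True})
assert requires_step_up({"new_location": True, "face_match_failed": True})
```

The point is the layering: no single signal decides the outcome, so an attacker must defeat several independent checks at once.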

Make Your SSO More Secure With HYPR

Recent attacks underscore the need for organizations to thoroughly scrutinize the security of their Okta deployment. The best approach to defend Okta and other SSO deployments is to integrate an end-to-end identity security system that strengthens the weakest and most vulnerable points.

HYPR, a trusted Okta partner, works in conjunction with Okta so your employees gain an optimized, frictionless experience and your organization gains a security-first identity strategy. HYPR integrates phishing-resistant passwordless authentication, adaptive risk mitigation, and automated identity verification to detect, prevent, and eliminate identity-related risks at every point in the identity lifecycle. It operates seamlessly with all major SSO providers, creating a single-action desktop-to-cloud authentication flow, with no password-based fallbacks.

Read more about the HYPR | Okta integration to learn how HYPR can painlessly solve your Okta security issues.


Spruce Systems

Digital Identities Need More Transparency: A Framework Proposal

We explore the potential of digital identity solutions to enhance privacy through selective disclosure, highlight the risks of verifier abuse, and propose a reasonable disclosure framework to standardize and safeguard data-sharing practices.

As we’ve previously discussed on this blog, new digital identity solutions, such as mobile drivers’ licenses, have incredibly exciting promise in terms of upgrading both security and privacy of the identity holder. In this post, I want to highlight yet another area where we, as an industry, still have work to do to make that promise of privacy a reality.

Promise of privacy-preserving disclosure

Digital identity specifications allow for the concept of “selective disclosure,” where ID holders can decide exactly what personal information they want to pass along to someone verifying their digital ID. The classic example is at a US bar with an over 21 age restriction – currently, a person presents a conventional physical driver’s license to prove their age, which also shows the bartender their name, address, and full date of birth. No bouncer needs to know all of that information just to confirm someone is able to responsibly enjoy a tasty beverage. A digital ID with selective disclosure, on the other hand, could allow you to prove you’re over 21 without revealing any other irrelevant or personal information.

There are scores of other examples where this concept of selective disclosure could be of benefit. An insurance card could now prove a person seeking health care has valid coverage, without revealing that person’s employer. An asylum seeker with a work permit form can prove their employability without having to reveal their country of origin. Digital identities can serve just as many purposes as their analog versions, but we benefit from them by reducing (or entirely removing) unnecessary data oversharing and leakage. 
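Mechanically, selective disclosure is often built on salted claim digests, in the spirit of SD-JWT: the issuer signs only hashes of the claims, and the holder reveals the (salt, claim) pairs they choose. A minimal sketch, with claim names invented for illustration:

```python
import hashlib
import json
import secrets

# Minimal salted-hash selective disclosure, in the spirit of SD-JWT:
# the issuer signs only digests, and the holder reveals (salt, claim)
# pairs for just the claims they choose to disclose.

def digest(salt: str, name: str, value) -> str:
    payload = json.dumps([salt, name, value], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Issuer: salt every claim and publish the digests (these would be signed).
claims = {"name": "Alex Doe", "over_21": True, "address": "12 Main St"}
disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
signed_digests = {digest(salt, k, v) for k, (salt, v) in disclosures.items()}

# Holder: reveal only the age claim to the verifier.
salt, value = disclosures["over_21"]

# Verifier: check the revealed pair against the signed digest set.
assert digest(salt, "over_21", value) in signed_digests
assert value is True
```

The verifier learns that the holder is over 21 and that the issuer vouched for it; the name and address digests reveal nothing without their salts.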

Risks of power imbalance

However, there are also potential downsides to this model. While selective disclosure appears to put the power in the hands of the credential holder to choose what information they want to share, there is potential for abuse by the other entity in the disclosure transaction, namely the verifier.

Every credential data request involves two parties: a verifier (the person seeking information) and an ID holder (the person with the information being sought). The verifier asks to confirm information about the holder, and the holder then “presents,” or sends, the requested digital data with an authenticating signature.

An unethical verifier might demand to see all information associated with a digital credential, not just the data needed for a particular transaction. They might be able to obscure a verification request in a way that an ID holder wouldn’t be made aware of all the data that was being requested or wasn’t given the opportunity to decide whether the request was reasonable prior to granting consent. Particularly given the nascency of this new model of interaction, holders of digital identity credentials will need help from industry practitioners to stay safe.

Reasonable Disclosure Framework

To that end, we see the potential for creating, socializing, and committing to standardized formats for common use cases, which we’re calling a Reasonable Disclosure Framework.

This shared format for data requests would give future users the full power of digital ID technology by ensuring data requests are transparent and that overreaching or deceptive requesters can be flagged. Moreover, since it would be based on an open standard, the framework would enable any organization to release a set of disclosure standards tailored to protect the privacy of their members or constituencies. 

One example might be a filter offered by an organization dedicated to the rights of retired Americans, designed to be vigilant against the identity scams often run against the elderly. Another filter, created by a computer science professional organization, might opt to give its expert users more personal discretion.

These disclosure filters would be attached to a digital wallet, similar to common browser extensions. A user could install more than one filter, creating a composable and layered approach to disclosure control. 

This could look like a user installing a fairly permissive default disclosure filter, then specific, narrower filters for their individual habits and circumstances: an ecommerce-specific disclosure filter from a trusted eCommerce watchdog, and a medical information filter from a privacy foundation. These organizations would have specialized knowledge in particular niches, meaning they would be best equipped to track, evaluate, and flag excessive or malicious data requesters in that niche.

Informed Data Sharing

Each filter could set a level of disclosure that would be considered acceptable for the particular use case, including the types of data being requested and for what purpose. If a verifier exceeded these acceptable parameters, a user would be presented with a warning message or popup similar to what web browsers now use to warn us away from insecure websites. The message could be something as simple as “This request may lead to misuse of your personal information.”

This would allow an average holder at least a fighting chance to be aware of when and how their data was being collected, and to opt out of transactions that went against a documented standard of reasonable trust.
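A filter check of this kind could be as simple as comparing the requested claims against the filter's allow-list. The claim names, filter structure and warning text below are illustrative assumptions, not part of any published standard:

```python
# Hypothetical disclosure filter: claim names and the allow-list are
# illustrative assumptions, not part of any published standard.
ECOMMERCE_FILTER = {
    "allowed_claims": {"payment_token", "shipping_address", "age_over_18"},
    "warning": "This request may lead to misuse of your personal information.",
}

def evaluate_request(requested_claims: set, disclosure_filter: dict) -> dict:
    """Flag any requested claim outside the filter's acceptable set."""
    excessive = requested_claims - disclosure_filter["allowed_claims"]
    if excessive:
        return {
            "ok": False,
            "warning": disclosure_filter["warning"],
            "excessive": sorted(excessive),
        }
    return {"ok": True}

# A reasonable request passes; an overreaching one triggers the warning.
assert evaluate_request({"age_over_18"}, ECOMMERCE_FILTER)["ok"]
result = evaluate_request({"age_over_18", "full_date_of_birth"}, ECOMMERCE_FILTER)
assert result["excessive"] == ["full_date_of_birth"]
```

Because filters are just data plus a check, a wallet could run several of them in sequence, surfacing the strictest verdict to the user.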

Enforceable Responsibility

This proposed system could also create accountability for deceptive credential requests and data misuse. Verification requests and usage disclosures would be cryptographically signed by requesters, just the same as digital IDs. Digital wallets would keep full records of these signed data requests and usage disclosures, so users could have detailed and accurate records of what information was asked for and disclosed in response to a specific request. 
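One way a wallet could keep such records tamper-evident is to hash-chain the entries, so rewriting history is detectable. This is a simplification of the cryptographic signatures described above, and the field names are illustrative:

```python
import hashlib
import json
import time

# Sketch of a tamper-evident request log a wallet might keep: each entry
# folds the previous entry's hash into its own, so altering any past entry
# breaks the chain. Field names are illustrative assumptions.

def append_entry(log: list, verifier_id: str, requested: list) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"verifier": verifier_id, "requested": requested,
             "ts": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "shop.example", ["age_over_18"])
append_entry(log, "clinic.example", ["insurance_coverage"])
assert verify_chain(log)

log[0]["requested"] = ["age_over_18", "home_address"]  # tampering
assert not verify_chain(log)
```

In the full proposal, entries would additionally carry the verifier's signature, turning the log into evidence of exactly what each verifier asked for.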

Over time, anonymized statistical analysis of signed request records could be used to identify verifiers that misrepresented their data needs or usage. One such case might be an online store found to have sold user data to advertisers. Trust organizations that publish reasonable disclosure filters could bring consequences to verifiers who are prone to abuses. This would allow enforcement of acceptable data practices to be driven more directly by entities closer to the specific credential scenarios.

These standards for reasonable sharing, warnings of excessive requests, and punitive measures against deceptive disclosures would be the primary variables set by the diversity of disclosure filters enabled by this proposed framework.

Interoperable Protocols

The most important feature of the digital identity standards currently emerging from industry and government efforts is that they are open and interoperable protocols. That is, they are based on a set of technical and data standards that allow any party to issue, receive, or present a credential. Building off this guiding principle, any organization would be able to build and distribute a disclosure filter.

We hope you see the potential in this proposal to protect consumers and bring reasonable transparency to the fore as digital identities rise in usage and prominence. Please reach out if you have an interest in discussing this concept with us at SpruceID.

Get in Touch

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Verida

Verida Community Update: Protocol 4.0 and Personal AI

Hello, gm everyone!

Chris Were here, the CEO and Co-founder at Verida 👋

I’m excited to share our latest community update, highlighting significant advancements and future plans for Verida. With the release of version four of our protocol, updates on our Self-Sovereign Compute Network, and a sneak peek into our vision for personal AI, there’s a lot to cover. Let’s dive in.

Protocol Version 4 Released

We’ve made a major leap forward with the release of version four of the Verida protocol. This update introduces an important separation between the Verida network and user identities. This change provides enhanced flexibility and security, enabling developers to leverage DID methods on various blockchains and networks. This is a game-changer for building more dynamic and secure decentralized applications.

Stay tuned for a detailed announcement next week, where we’ll share comprehensive release notes and insights into how this upgrade will empower our developer community.

Self-Sovereign Compute Network

We’re thrilled to announce the expansion of the Verida network to support Self-Sovereign Compute. This initiative will enable privacy-preserving compute, allowing your data to be used in innovative and secure ways. One of the most exciting applications of this technology is personalized AI.

We’re working on a draft technical light paper that outlines the architecture of our Self-Sovereign Compute Network. This document will serve as a foundation for collaboration and feedback from stakeholders, advisors, and experts in the field. We believe this is a pivotal step towards realizing the vision of self-sovereign AI, where data ownership and privacy are paramount. We aim to release this document soon, providing a comprehensive look at our plans and progress.

Verida Network Explorer

We are releasing an upgraded Verida Network Explorer. This new version will provide users with enhanced insights into network operations, complete with detailed graphs and a new nodes tab.

Upgraded Network Explorer, coming soon

This upgrade is the first step towards enabling more users to run nodes, stake, and utilize the core capabilities of the Verida network. Keep an eye out for the release announcement.

Data Connector Server Demo

Our recently announced personal data bridge is a revolutionary tool that allows users to connect to various APIs and pull their data into the Verida network.

Verida’s Personal Data Bridge

Behind the scenes, the Verida data connector server powers this functionality. This backend infrastructure enables users to synchronize data from platforms like Google, Facebook, Twitter, and more, into the Verida network, ensuring secure and private data management. To see the backend server demo in action, watch the community update video.

We’re finalizing the frontend interface, which will soon be available for users to seamlessly integrate their data with Verida, facilitating connections to personal AI assistants and other use cases.

Sneak Preview: Personal AI and Digital Twin

At Verida, we’ve always been about empowering individuals to own and benefit from their data. With the emergence of AI, we see a powerful opportunity to create personal AI agents that are private and secure. Imagine an AI that knows everything about you and helps you in your daily life, all while keeping your data private. This is the future we’re building towards.

We’re preparing to launch a landing page dedicated to personal AI and your digital twin. This page will showcase our vision for personal AI, provide insights into potential user interfaces, and invite you to join us in this exciting journey.

Sneak peek: Curious to meet our digital twin?

Conclusion

Thank you for your continued support and interest in Verida. There’s a lot happening behind the scenes, and we’re eager to share these developments with you. Over the next few weeks, we’ll be making formal announcements and releasing more details about these exciting projects. Together, we’re building a future where you have control over your data and can leverage it in powerful, privacy-preserving ways.

Stay connected and watch this space for more updates!

Yours truly,

Chris Were — CEO & Co-founder, Verida

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. Utilizing cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for various industries. We are also at the forefront of developing privacy-preserving personalized AI solutions. For more information, visit Verida.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Verida Community Update: Protocol 4.0 and Personal AI was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


TBD

7 Unexpected Ways Verifiable Credentials Are Used Today

Verifiable Credentials are more than a buzzword. Discover 7 real-world use cases revealing their hidden impact.

"Verifiable Credentials seem niche. Only a small group of technologists would be interested in this," a conference attendee declared after I enthusiastically described my company's work in the Global Payments and Self-Sovereign Identity (SSI) ecosystem.

This comment gave me pause. Less than a year ago, I first encountered Verifiable Credentials (VCs) - a W3C standard for digital credentials that state specific facts about individuals, organizations, or entities. My initial reaction was similar to the conference attendee’s response. However, after interviewing SSI builders on a weekly livestream, my perspective changed dramatically.

While VCs are relatively new and primarily explored by SSI enthusiasts, their benefits extend far beyond this group. Many of the services and products you use today already rely on them – even if you don't realize it.

From healthcare to content creation, finance to travel, VCs are changing how we share and verify information. Here's a look at seven real-world applications of VCs that may surprise you:

1. Mobile Driver’s Licenses

If you often misplace your wallet like me, there's good news — your state may support the use of Mobile Driver's Licenses (mDLs). Louisiana was the first state to implement mDLs in 2018. Since then, mDLs have been gaining traction across the United States. With mDLs, residents of participating states can store a digital version of their driver's license on their smartphones. This allows users to leave their physical ID at home and use their phones for identification in various situations, including travel, age-restricted services, voting, and law enforcement interactions.

VCs are the technology behind mDLs because they offer features like revocation, authentication, expiration, and tamper evidence. If you use an mDL, you're using VCs!

Source: https://lawallet.com/
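A toy check can make the listed properties concrete: expiration, revocation and tamper evidence. This is a sketch under assumed field names; a real mDL verifier would additionally validate the issuer's cryptographic signature:

```python
import hashlib
import json
import time

# Toy credential check illustrating three properties of VCs: expiration,
# revocation, and tamper evidence. Field names are illustrative assumptions;
# a real verifier would also validate the issuer's signature.

def fingerprint(claims: dict) -> str:
    return hashlib.sha256(json.dumps(claims, sort_keys=True).encode()).hexdigest()

def check_credential(credential: dict, revoked_ids: set, now: float) -> bool:
    if credential["id"] in revoked_ids:
        return False                                   # revocation
    if now > credential["expires_at"]:
        return False                                   # expiration
    # Tamper evidence: the claims must still match the issued fingerprint.
    return credential["fingerprint"] == fingerprint(credential["claims"])

claims = {"holder": "Alex Doe", "license_class": "E"}
cred = {"id": "mdl-001", "claims": claims,
        "expires_at": time.time() + 86400, "fingerprint": fingerprint(claims)}

assert check_credential(cred, revoked_ids=set(), now=time.time())
assert not check_credential(cred, revoked_ids={"mdl-001"}, now=time.time())

cred["claims"]["license_class"] = "A"  # tampering is detected
assert not check_credential(cred, revoked_ids=set(), now=time.time())
```

Each of these checks happens automatically when an mDL is presented, which is exactly what a laminated card cannot offer.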

2. The Pharmaceutical Supply Chain

Dr. Carsten Stöcker, CEO of Spherity, introduced me to a fascinating application of VCs in the pharmaceutical industry. Pharmacies often trade medicines with each other to maintain their stock, but this exchange requires complex verification processes. They must verify the legitimacy of three key elements:

– the medicine
– the organization they're trading with
– the provider making the trade

Caro.vc, a Spherity company, employs VCs to simplify this process and reduce errors. Their solution allows pharmacies to quickly and securely verify all these elements, ensuring the integrity of the pharmaceutical supply chain.

To learn more about this use case, check out this discussion between Dr. Carsten Stöcker and the Developer Relations Team at TBD:

3. Content Creation and Generative AI

Content creation has played a considerable role in our online lives for the past few years. For some, it has become a career, launching individuals into fame. However, whether you're an artist, seamstress, or photographer, people often steal and claim work that you made.

Additionally, a new form of content creation has hit the scene: generative AI. While many use it for productivity, others exploit it to spread misinformation or generate false images and videos in the likeness of others.

Organizations like Adobe, BBC, Microsoft, Sony, and Nikon formed the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity to address these issues. These groups developed Content Credentials, which use VCs to tackle the challenges of content theft and misuse of AI-generated content, aiming to protect creators’ rights.

Source: https://contentauthenticity.org/blog/community-story-wrapt

Click this link to read the case study, view the above image, and view the image's content credentials.

4. Music Copyright

Musical artists typically don't own their masters, meaning they lack full control over their recordings when signing a record deal. Today, more musical artists are opting to own their masters to regain control over the distribution of their work. Cole Davis describes the publishing and distribution of music as a "music supply chain" and observed a disjointed process involving agreements through text messages, scattered emails between lawyers, and inconsistent procedures.

To address these issues, Davis built Switchcord, using VCs to provide cryptographic proof of who was involved in creating a song, when it happened, and what contracts were signed. This ensures all participants receive proper credit and compensation, preventing false claims.

To learn more about this use case, check out this discussion between Cole Davis and the Developer Relations Team at TBD:

5. Loan Applications

The U.S. credit system requires residents to accumulate debt as a prerequisite for obtaining loans, leading to more debt. Recent data from the Federal Reserve Bank of New York underscores this systemic issue, revealing that 1 in 5 applicants for mortgages, car loans, or other loans were rejected — the highest rate in five years.

FormFree is addressing this problem using the Web5 SDK to provide VCs for loan borrowers through their Passport product. Their approach involves creating an anonymized, tamper-proof credit profile as a VC for lenders to review and make offers, aiming to put power back in the hands of the borrower.

To learn more about this use case, check out this discussion between the FormFree team and the Developer Relations Team at TBD:

6. Online Marketplaces and Catfishing

Unfortunately, online marketplace scams are common. From purchasing a car to renting a home or adopting a pet, you can buy and sell almost anything online. However, there's no foolproof mechanism to ensure the seller is trustworthy.

With the rise of social media, catfishing — where a person pretends to be someone they're not while online dating — has also increased dramatically. While many believe they're not susceptible, 23% of online dating participants reported being catfished, and 41% of catfish victims are between the ages of 18 and 34.

Jeffrey Schwartz created Dentity to reduce the frequency of scams and catfishing incidents. Dentity uses VCs to verify individuals on any platform, from dating apps to online marketplaces.

To learn more about this use case, check out this discussion between Jeffrey and the Developer Relations Team at TBD:

7. Native Tribes

Special Economic Zones (SEZs) are designated areas within a country that operate under different business and trade laws than the rest of the nation. These zones typically offer incentives like tax breaks and simplified regulations. The Catawba Indian Nation established their own SEZ called the Catawba Digital Economic Zone, with the goal of driving economic development, attracting businesses, and creating opportunities for tribal members.

The Catawba Digital Economic Zone is using the Web5 SDK to grant VCs to members. These VCs allow users to prove their identity and achieve regulatory compliance within the zone.

To learn more about this use case, check out this discussion between the Catawba Digital Economic Zone Team and the Developer Relations Team at TBD:

Learn More

Verifiable Credentials are making a tangible difference by solving real problems for real people - simplifying loan applications, protecting artists' rights, ensuring pharmaceutical safety, and supporting tribal sovereignty.
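Across these use cases, the credentials generally follow the W3C Verifiable Credentials data model. As a rough sketch in Python (field values are illustrative placeholders, not taken from any of the products above, and a real credential would carry a cryptographic proof section rather than being plain JSON):

```python
# Minimal sketch of an unsigned credential envelope per the W3C VC data model.
# All DIDs and claim names below are hypothetical examples.
import json
from datetime import datetime, timezone

def build_credential(issuer_did: str, subject_did: str, claims: dict) -> dict:
    """Assemble an unsigned credential envelope around a set of claims."""
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": issuer_did,
        "issuanceDate": datetime.now(timezone.utc).isoformat(),
        "credentialSubject": {"id": subject_did, **claims},
    }

vc = build_credential(
    "did:example:issuer123",       # hypothetical issuer DID
    "did:example:subject456",      # hypothetical subject DID
    {"songCredit": "co-writer"},   # example claim, echoing the music use case
)
print(json.dumps(vc, indent=2))
```

In practice an issuer would sign this structure and the holder would present it from a wallet; the sketch only shows the shape of the data being attested.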

If you have ideas for building apps with VCs, check out these resources:

Build your own Verifiable Credentials with the Web5 SDK
How TBD is using VCs in the tbDEX SDK
TBD’s YouTube Channel

Sunday, 28. July 2024

KuppingerCole

CIAM Market Update 2024: Key Developments and Trends

In this episode, Matthias Reinwarth and John Tolbert discuss the consumer identity and access management (CIAM) market. They cover new entrants in the market, the impact of mergers and acquisitions, new features in CIAM products, deployment models, B2B functionality, decentralized identity, and the role of AI in CIAM.




Saturday, 27. July 2024

Safle Wallet

New features, integration and Partnerships

Weekly Safle Update! 🚀

Exciting times are ahead as we unveil stellar updates, including new integrations, strategic partnerships, and enhanced features. Join us on this thrilling journey through the latest advancements in the Safle ecosystem. Ready to explore? Buckle up and dive into our cosmic updates!

🚀 Product Updates

1️⃣ Solana Integration 🌟

Fast Transactions & Low Fees: Solana is coming to Safle Wallet!
New Features: Send and receive SOL tokens & NFTs effortlessly.

2️⃣ Safle Lens 🔭

Development Complete:

Easy Login: One-click login with SafleID, EVM Address, or ENS.
Asset Snapshot: View your assets and their performance across supported EVM chains.
Comprehensive View: See both your token and NFT holdings.
Clean and Clear Safle Lens: Hide spam tokens in your SafleLens. Bye-bye spammy tokens!

3️⃣ Improved SafleID Generation ⚡

Faster Relayer: We’re scaling our gas relayer for quicker SafleID generation and faster transactions.

4️⃣ Documentation 📚

Safle Docs: Check out our newly released documentation.

SafleID Documentation - 👉 Read more
Safle Vault Documentation - 👉 Read more
Safle Wallet Documentation - 👉 Read more

🤝 New Partnerships

1️⃣ Zokyo

We’re excited to have Zokyo.io as our trusted audit partner! Their security review now fortifies Safle’s infrastructure. 💪🔐

2️⃣ Moralis

We’re pleased to partner with Moralis.io. Safle Wallet now uses Moralis for automatic data detection across multiple EVM chains, enhancing asset management. 🚀📈

📈 Marketing Growth

1️⃣ Community Buzzing 🌟

Increased Engagement: Safle’s community is thriving! We’re focusing on expanding our user base and increasing engagement.
Strategic Campaigns: We’re launching targeted campaigns with Galxe and QuestN. These campaigns are designed to attract new subscribers and incentivize user participation.
Enhanced User Interaction: These initiatives will drive deeper community engagement and attract more members to Safle, contributing to our growth and market presence.

2️⃣ The search continues 🔍

Got any DevOps Ninjas, Kickass Growth Marketers, Detail-Oriented QA Experts, or Innovative Blockchain Engineers in your circle? Send them our way, and we’ll take it from there. Check out our openings here and join Safle’s journey.

Download the Safle App Now!

Experience the power of Safle at your fingertips 🚀

🔗 Download SafleWallet

Join the Safle Community 🧑‍🚀

🔗 Join now

Friday, 26. July 2024

Extrimian

Age Verification in Online Gaming & Betting


The rise of online gaming and gambling has ushered in new challenges, particularly concerning underage access. Traditional age verification methods are proving inadequate, often breaching privacy or failing to prevent minors from accessing restricted content. Self-Sovereign Identity (SSI), or decentralized identity technology, emerges as a powerful solution to bolster age verification processes while upholding privacy standards.

Current Challenges in Age Verification:

Ineffective Measures: Many platforms rely on easily manipulated self-reported data for age verification.
Privacy Risks: Traditional methods compel users to disclose sensitive personal data, raising serious privacy concerns.
Regulatory Complexity: Compliance with diverse international regulations complicates operations for global platforms.

The Growing Concern of Child Gambling and Gaming Accessibility:

Ease of Access: The widespread availability of digital devices gives minors easy access to online gambling and gaming sites.
Targeted Marketing: Many sites deploy marketing strategies that resonate with younger audiences, inadvertently attracting minors.
Inadequate Controls: Parental control features often fall short in effectively barring persistent minors from accessing age-restricted platforms.

SSI’s Role in Streamlining Age Verification:

Self-Sovereign Identity offers a transformative approach to managing digital identities that can significantly enhance the enforcement of age restrictions:

Protecting User Privacy: SSI enables the verification of a user’s age without disclosing additional personal details. A verifiable credential could confirm a user is over a certain age limit without sharing their exact date of birth.
Building Trust through Security: Leveraging blockchain, SSI ensures that identity credentials are secure and immutable, fostering trust between users and platforms.
Universal Application: Once issued, SSI credentials can be reused across multiple platforms, simplifying compliance with varying global regulations.

Steps for Effective Implementation:

Integrating Technology: Seamless integration of SSI with existing digital platforms requires cooperation between technology developers, regulatory authorities, and platform operators.
Promoting User Adoption: Widespread user adoption of SSI is crucial and can be encouraged through education and clear benefits over traditional methods.
Regulatory Support: Regulatory bodies need to recognize and support SSI as a legitimate verification method, facilitating its broader acceptance and implementation.

The urgency to address underage access in online gaming and gambling necessitates a shift towards more reliable and privacy-centric verification methods. Self-Sovereign Identity holds promise in meeting these demands, offering robust solutions for platforms to enforce age restrictions effectively while safeguarding user privacy.
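The core verification step, proving a user is over an age limit without revealing a date of birth, can be sketched as follows. This is purely illustrative: the claim name and trusted-issuer list are hypothetical, and a real SSI verifier would check a cryptographic proof on the credential rather than trusting the dictionary's contents.

```python
# Illustrative "age-over" predicate check. The verifier learns only that
# the threshold holds: no name, no date of birth. The issuer DID and the
# "ageOver" claim name are assumptions for this sketch.
TRUSTED_ISSUERS = {"did:example:gov-id-authority"}

def is_age_verified(credential: dict, minimum_age: int) -> bool:
    """Accept only if a trusted issuer attests an age at or above the limit."""
    if credential.get("issuer") not in TRUSTED_ISSUERS:
        return False  # unknown issuer: reject without learning anything else
    age_over = credential.get("credentialSubject", {}).get("ageOver")
    return isinstance(age_over, int) and age_over >= minimum_age

credential = {
    "issuer": "did:example:gov-id-authority",
    "credentialSubject": {"id": "did:example:player789", "ageOver": 21},
}
print(is_age_verified(credential, 18))  # threshold met, no DOB disclosed
```

A gaming or betting platform would run such a check at sign-up, accepting the credential from the user's wallet instead of collecting identity documents.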

Explore More:

Discover how Extrimian is leveraging SSI to transform digital identity management and enhance online safety on our Digital Identity Solutions page.

The post Age Verification in Online Gaming & Betting first appeared on Extrimian.


Microsoft Entra (Azure AD) Blog

Migrate ADAL apps to MSAL with enhanced insights


We’re pleased to announce significant updates to the Sign-ins workbook in the Microsoft Entra admin center, a crucial tool for organizations transitioning from Azure Active Directory Authentication Libraries (ADAL) to Microsoft Authentication Libraries (MSAL). These updates aim to streamline the ADAL migration process by providing comprehensive insights into your ADAL-application-related data.


Why is this Important?


We announced ADAL end of life in June 2020 and stopped providing security updates as of June 2023, which means applications using ADAL can’t utilize the latest security features, leaving them vulnerable to future security threats. We strongly recommend migrating any application using ADAL to MSAL to improve the security posture and resilience of authentication and authorization in your client applications.


MSAL supports the latest security features for Microsoft Entra ID like managed identity, Continuous Access Evaluation (CAE), passkeys, and many more. The updated Sign-ins workbook is an essential tool in this transition, providing the necessary insights and data to make informed decisions to execute migration.


What's new in the Sign-ins workbook?


The Sign-ins workbook has been redesigned for admins who need a centralized, more detailed view of applications using ADAL within their tenant. These additional insights help them identify, investigate, and validate ADAL applications so they can successfully migrate to MSAL.


Here’s what you can expect with the latest enhancements:


Comprehensive sign-in log aggregation: The workbook now consolidates logs from various types of sign-in events, including interactive, non-interactive, and service principal sign-ins.
Enhanced data visualization: We updated the report with new aggregated metrics to enable an all-up view of sign-ins across ADAL applications. To aid in your specific analytical needs, the workbook supports the application of custom filters and queries. This flexibility enables you to focus on the information that matters most to your ADAL migration efforts.
Integration with Microsoft Entra recommendations: You can now directly access this Sign-ins workbook from the ADAL to MSAL recommendation page to dive deep into the list of ADAL applications listed on the recommendation details page.

To use the workbooks for Microsoft Entra ID, you need a Microsoft Entra ID tenant with a P1 license.


Figure 1: ADAL apps sign-in data


Figure 2: Apps sign-in data


Plan to update your application


Get started by accessing the workbook to get a list of all ADAL applications and the details associated with them. Our migration guide walks you through all the steps to transition applications from using ADAL to using MSAL.


Neha Goel 

Senior Product Manager, Microsoft  

LinkedIn


Read more on this topic 

Migrate applications to the Microsoft Authentication Library
Manage workbooks
How to use Entra Workbooks
Migrate from ADAL to MSAL
Entra Recommendations


Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Dark Matter Labs

Diving Deep into the Deep Code (Pt2): Unraveling the Knotted Problem Space of Data

Image credit: Madelyn Capozzi

ChatGPT is everywhere. The AI chatbot exploded into the mainstream almost overnight, reaching 100 million monthly users just two months after it was launched back in November 2022 (Reuters, 2023). Since then, ChatGPT has been enlisted to do nearly everything, from writing code, to passing high school exams, to even crafting a Bible verse about how to remove a peanut-butter sandwich from a VCR. OpenAI — and Alphabet, Meta, Microsoft and a handful of startups — built these impressive machine learning systems, yet they didn’t do it alone: it wouldn’t have been possible without the wealth of data from our digital commons (and the hard, extractive and invisible labor of thousands of data labelers). In fact, your comments on Reddit or X may have been critical in building ChatGPT and will likely be used to build more AI systems in the future. This calls into question the usage of property rights as a framework for data and our digital economies: should you get a share of the profits from the tech innovations your data helped create? Can you say no to your data being used for certain purposes? How do we balance individual rights with collective responsibilities?

This blog explores the knotted problem space of data governance, and how traditional property rights frameworks are increasingly ill-suited to deal with the distributed contributions, impacts, and risks of data and the emerging technologies it makes possible. It argues that rather than governing data, we need to shift towards democratic governance of access to data and what it is used for to unleash its full potential to deliver for public good.

This is Part 2 of our deep dive into property rights (in Part 1 we peeled back the layers of the housing crisis) in which we explore its role and potential in dealing with today’s systemic challenges. We do this by looking through the lens of affordances and disaffordances: what do our property systems allow us to do, see, be and imagine? What incentives do they create and what priorities do they assign? And consequently, how could their redesign recast our relationship with each other and with our natural and built environments, and create a pathway to systemic thriving?

These blogs are intended both as an exploratory journey and a rallying cry to policymakers and strategic risk holders to deepen their commitment to systemic innovation and invest in compelling new visions for the future to be able to overcome political and institutional lock-ins and match the necessity of the contexts we face.

DATA GOVERNANCE: A KNOTTED PROBLEM SPACE

Everything is made of data. Data has become and will continue to be at the very heart of our societies. As Bing Song stated, “it is much like the air we breathe, water we drink and electricity we depend on”. The immense social and economic importance of data presents one of the most important governance challenges of our time, yet to many, the nature of the problem space remains opaque.(1)

In The Age of Surveillance Capitalism, Shoshana Zuboff likens our inner lives to a pre-Colonial continent, invaded and strip-mined of data by Big Tech, driven by an insatiable profit motive that demands the extraction of all data, from all sources, by any means possible. This entails the datafication and “surveillance of people, places, processes, things, and relationships among them” (van Dijck, 2014). Data is used to profile and target people, to optimize systems, to control outcomes. This might be as “harmless” as personalized ads or diet trackers. Or, as worrying as police using cameras with facial recognition software. Or, as dystopian as Amazon using wristbands to track where their warehouse workers are at all times and provide haptic feedback when they work inefficiently.

Our view is that the problem with data is not datafication per se — although we recognize that seeing the world in a way that asserts everything is data shapes how we understand and interact with the world in ways that “sort it into categories and norms, to render it legible and observable, to exclude other metrics and methods of knowing it” (Bowker and Star, 2000). Rather, we think that data’s potential to deliver collective value is currently curtailed by extractive and exclusive property and ownership logics that optimize for private financial value, control, and rent-seeking. By trying to govern data through property rights, we have done it and ourselves a disservice, limiting the actions, behaviors and social imaginaries it has allowed for, and resulting in the worrisome reality of Big Tech, Big Brother and “Big Other”.

UNTANGLING THE KNOT

So what precisely are the actions, behaviors, and social imaginaries that property and ownership afford us when it comes to our data and digital economies? What are the knotted challenges of how we govern data today?

Current data governance allows for the privatization of public value and contributions …

The introduction already hinted at it, but if there’s one thing that property is inherently bad at it is accounting for multiple, interconnected contributions and value flows. Nowhere is this more evident than with machine learning systems like ChatGPT. Property affords only a reductive mode of information processing and organizing in which complexity and entanglement are reduced to systems of low information burdens. Historically, property rights were designed to provide security, encouraging the development of land and resources by clearly delineating boundaries between owners and non-owners and communicating rights and entitlements. After all, “the earth would not produce her fruits in sufficient quantities, without the assistance of tillage: but who would be at the pains of tilling it, if another might watch an opportunity to seise upon and enjoy the product of his industry, art, and labour?” (Blackstone, 1803) Yet this mechanism is inadequate when the value produced comes not from an individual owner but from the “collective intelligence” of humanity. Machine learning models use vast databases of information (text, code, images) scraped from the internet — all of which is part of the digital commons contributed to by many. While much of the value derives from the commons (2), the profits of the models and their applications are disproportionately — if not singularly — captured by those who create them, rather than being reinvested into the commons. Current property rights do not create obligations towards third parties or entitlements for those who contributed, failing to ensure that value from our digital economies benefits the broader community.

… and fails to recognize data is not about individual actors but relationships among them

To address the private capturing of data’s value, many have hailed individual data ownership as a precondition to return “control” to the individual. Senator John Kennedy (R-LA) introduced the “Own Your Own Data Act of 2019,” which declares that “each individual owns and has an exclusive property right in the data that individual generates on the internet” and requires that social media companies obtain licenses to use this data, while Alexandria Ocasio-Cortez has argued for data ownership as a solution to inequality, tweeting: “the reason many tech platforms have created billionaires is [because] they track you without your knowledge, amass your personal data & sell it without your express consent. You don’t own your data, & you should.” The problem is that solving data governance through individual property rights is like trying to force a square peg into a round hole. Data’s inherent qualities make it impossible to be treated like any other asset under property rights. Data’s intangibility and ubiquity mean that it has little use or exchange value in the form of small amounts of raw information. Data’s value is derived from economies of scale. That means data is always about relationships, not the individual. Information is useful (or harmful) because it can be used to infer insights about — and thus make decisions affecting — multiple people. This inherent relationality means that property rights, with their singular lens of bounded individualism, cannot effectively or legitimately govern data. Therefore, Matt Prewitt from RadicalxChange has argued that, “data cannot be owned, but must be governed.”

Our individual data, like a single gust of wind, says little on its own. Its true value comes from its role in observable patterns across larger datasets. Once interpreted, these patterns can be used to inform all kinds of interventions, including new forms of top-down control.

Current data governance prioritizes exchange value over use value …

In our blog on the housing crisis, we saw that house prices are largely driven by land values, and that these have risen at a much faster rate than incomes in nearly all advanced economies (Piketty, 2014). The consequence has been increasing rentier returns to land- and homeowners on the one hand, and a major housing affordability crisis on the other. Increasingly, we are seeing that our digital economies are getting trapped in similar cycles of “data rentiership”. While value extraction in housing happens through access to scarce resources, i.e. land, in digital economies it happens through control of the innovation process (Mazzucato, Collins and Gouzoulis, 2020). Big Tech has been able to monopolize ownership and control over personal data, extracting financial value without creating any additional use value. Aggregation and accumulation of data has become a business model in itself (Zuboff, 2018). Property rights have allowed companies to extract rents from the use of their monopolistic platforms, either through service providers or consumers, without creating an incentive to deliver innovation that benefits society. The result: more value is being extracted from our data economies than added to it.

… and has permitted the centralization of control, and thus risk

Another parallel we can draw between land and data governance is by looking at how property rights have permitted small privileged classes of “owners” to exercise control. Data is not just a means of wealth, it is also a means of governance. Data ownership has systematically disempowered everybody except for a handful of companies that amass the most data. The risks concomitant with this power asymmetry are felt as micro-massive impacts in our daily lives, our democracies, and our economies. Think of Cambridge Analytica and how it leveraged the personal data of millions of Facebook users without their consent for political advertising purposes to try to influence future political, and economic, outcomes. Meanwhile, centralized systems of control, verification and storage are also more vulnerable to large-scale data breaches, with downstream effects that may cause mass destabilization, creating ripple effects across global supply chains and disruptions to essential services and infrastructure, such as healthcare and food systems. The WannaCry Ransomware Attack, for instance, disrupted over a third of NHS Trusts in England, forcing emergency rooms to divert patients and cancel surgeries.

Image credit: Webroot/BBC

Data has unleashed a new kind of power that cannot be adequately governed by centralized public or private systems

Property rights have long been the primary mediator between public and private power. Yet with the rise of the predictive and market-making power of data we are seeing that the state’s role, as both guarantor and regulator of property, is becoming increasingly unworkable. States are not only overpowered by the property interests of tech companies, they are also struggling to intelligently and effectively regulate the increasingly complex systems underpinning our digital economies. While intellectual property rights owe their existence to law and the willingness of states to back them with their coercive powers and render them enforceable, the power of data is not dependent on the state. Companies have mostly relied on technological barriers to limit access to the data they have amassed. In fact, they have benefited precisely from the inability of the state to regulate, taking advantage of the ambiguity that has surrounded data ownership. This new reality, in which the power of data has emerged as a wholly new form of institutional power outside of the full control of state or private actors, calls for new governance capabilities that ensure this power is held accountable and directed towards public good.

CRAFTING PATHWAYS TOWARDS NEW AFFORDANCES

This process of untangling the knot of challenges surrounding data governance shows how property and ownership are increasingly insufficient. It shows the need for new institutional mechanisms that provide a wider scope of affordances, allowing for new ways of relating to our data and governing what it is used for. This would need to involve decoupling data from both control and extractive dynamics in favor of stewardship, responsibility, and relationality to ensure it delivers new levels of public value and innovation in ways that are altogether more equitable, accountable, and distributed.

At the heart of this shift in governance is fundamentally a different way of thinking about data itself. Because data is always about relationships among actors, our assumption of individual rights needs to make way for collective responsibilities and agency. In this way, the inequality and power asymmetries that have emerged in today’s data landscape are not about reclaiming control or individual repayment, but about the collective determination of outcomes for which data is developed and used. Rather than optimizing for individual and singular interests — of “data owners” or “data subjects” — we need to recognize and balance the full spectrum of overlapping and at times competing interests, risks, and value flows implied in data governance and optimize for the potential of data itself. As such, data can be transformed from what is now a “dead” financial asset into a generative agent, which unlocks value not just for the very few but for our collective well-being.

In this new paradigm, data can be thought of as a river flowing through our digital economies. It emerges as an active agent in a complex web of relationships, with the autonomous ability to effect change — both positively and negatively — thus creating inherent responsibilities. Governing these rivers of data is akin to the nonexclusive rights riparian owners have over a river that runs by their land. The interdependence between those upstream and downstream requires us to take into account conflicting interests and needs, and implies imposing certain restrictions for the public good. Organizations that want to benefit from the bounty of these rivers must act as stewards. Rather than hoarding the water or excluding others from its use, their role is to support the river’s full potential to deliver the broadest possible value. While property rights once made data governance a matter of control, this new paradigm shifts the focus to collective care.

There are numerous examples of public data sets and new approaches that center distributed governance and seek to unlock data’s potential for public good. Barcelona, for instance, has implemented a civic data trust to manage its data commons, allowing citizens to have a greater say over what data is collected and for which purposes while also supporting experiments in citizen-led decision-making through open-source platforms. Or OpenStreetMap, a network that has dedicated itself to developing and distributing free geospatial data in ways that would not easily be accomplished by individual mappers alone.

Adding to this work, Dark Matter Labs has been exploring the potential of self-owning data governed through a network of digital micro-trusts with Care Sense, a new Proof of Possibility developed as part of Property & Beyond Lab. We are envisioning self-owning urban sensing infrastructure, such as street cameras, that leverage data to dynamically assess and respond to contextual care needs in the city, by either enabling direct responses (e.g. alerting emergency responders) or generating insights that other stakeholders can act upon (e.g. revised road policies). By making the sensing data self-owning, we allow it to flow more freely across the system, unlocking broad public value and reducing risk through distributed governance, verification and accountability mechanisms. The foundational infrastructure for this self-ownership is built on a network of digital micro-trusts which automatically release permissions for data access, manage use cases, and maintain registries of permissions.
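The permission-release logic such a micro-trust might implement can be sketched as follows. This is purely illustrative, with all names and purposes hypothetical rather than Care Sense's actual design: access to sensing data is granted per declared purpose, and every decision is appended to an auditable registry.

```python
# Sketch of a digital micro-trust: purpose-bound access decisions with an
# append-only audit registry. Names and purposes are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class MicroTrust:
    permitted_purposes: set
    registry: list = field(default_factory=list)  # audit log of every decision

    def request_access(self, requester: str, purpose: str) -> bool:
        """Grant access only for a permitted purpose; log the decision either way."""
        granted = purpose in self.permitted_purposes
        self.registry.append(
            {"requester": requester, "purpose": purpose, "granted": granted}
        )
        return granted

trust = MicroTrust(permitted_purposes={"emergency-response", "road-policy-insight"})
print(trust.request_access("city-traffic-dept", "road-policy-insight"))  # permitted purpose
print(trust.request_access("ad-broker", "targeted-advertising"))         # refused purpose
print(len(trust.registry))                                               # both decisions logged
```

The point of the sketch is the shift the text describes: the trust does not "own" the data but stewards access to it, with distributed verification made possible by the registry of decisions.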

OUR CALL TO ACTION

Just as with the housing crisis, the knotted problem space of data demands a deep-code perspective to reveal how seemingly discrete challenges are in fact interrelated and interdependent, and are rooted in an outdated systems-logic based on individual ownership. What becomes abundantly clear is that property rights in their current form are insufficient to address the privatization of public value, to deal with the inefficiencies of use and rent-seeking behaviors in our digital economies, or to manage distributed contributions and value flows of emerging technologies. Privacy regulations like GDPR or proposals for individual data ownership are welcome intermediate solutions but fail to recognize that the challenge of data governance simply cannot be resolved through the lens of individual rights and control logics. To allow data to be used to its full potential, and to support the democratization of our digital economies and better governance of today’s complex realities, we are in urgent need of new institutional capabilities (governance frameworks, legal mechanisms, interfaces) that allow us to relate differently to data as a relational and critical infrastructure.

Notes

(1) A critical part of the problem space we are choosing not to cover in this blog is that of AI’s environmental impacts — and that of tech and data economies more generally — and the governance challenges surrounding this. We recognize that the growing demand for data and AI tools carries immense environmental costs, from the extraction of critical minerals for the development of hardware, to the enormous energy consumption for the training of AI models and water usage for cooling data servers. Both Google and Microsoft have reported significant increases in emissions as they have integrated AI throughout many of their core products. There is a real risk that big data and tech companies are on the path to becoming greater emitters than fossil fuel companies; not just from their direct environmental impacts but from the second- and third-order effects of AI on total global consumption from higher overall productivity. The planetary-level challenges surrounding AI require a deep and nuanced exploration that is beyond the scope of this blog.

(2) We recognize that the value of AI systems does not just come from the digital commons, but also from the algorithm that is able to process high volumes of data, the servers which work on instant speed to respond to requests, the design used to teach AI English or filter out violent and abusive content, the tedious labor involved in filtering through and labeling data, and much much more.

Did this blog resonate with you and are you interested in getting involved? We see a few potential pathways for engagement:

I am a funder or strategic risk-holder and I would like to understand how I can support

We are hoping to convene with funders and strategic risk-holders who are keen to explore and actively support a strategic portfolio of real-world demonstrations of system-level solutions to data governance.

Jayne@darkmatterlabs.org

I have experience or expertise that I think might be useful and I am interested to see how I could contribute to making this thinking happen in the real world

We are looking to connect with potential collaborators who would be keen to build our Proofs of Possibility like Care Sense in real-world contexts.

alexandra@darkmatterlabs.org

This blog was written by Alexandra Bekker (alexandra@darkmatterlabs.org) in collaboration with Jayne Engle and Indy Johar, building on work by Gurden Batra, Eunsoo Lee and Shuyang Lin.

Property & Beyond Lab is currently supported by Omidyar Network and Rockefeller Foundation, and in collaboration with RadicalxChange Foundation and a Stanford University research team.

Diving Deep into the Deep Code (Pt2): Unraveling the Knotted Problem Space of Data was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Navigating the APAC stablecoin regulatory landscape with Ecosystem Monitoring


Over the past year, the Asia-Pacific region has seen important developments taking place around the regulation of stablecoins. From Singapore to Hong Kong to the Philippines and beyond, regulators have been crafting rules that will require stablecoin issuers to meet extensive and rigorous standards designed to ensure financial stability, protect consumers, and mitigate risks related to financial crime. 


Verida

True Web3 Ownership Starts With Verifiable Credentials

Verida’s credentials library helps developers future proof their applications and master digital identity

The world is moving online at a rapid pace, and with this transition comes the need for new processes and standards to identify ourselves. Verida harnesses Web3 infrastructure to empower people with verifiable, self-sovereign identities, removing the reliance on centralized organizations.

To understand how Verida achieves this, we first need to know what credentials are and how they’re managed.

Society and credentials

From ancient civilizations to the digital worlds of Web2 and Web3, it’s impossible to navigate the world without trustworthy credentials. Generally speaking, you need physical documents such as diplomas, certificates, and identity cards to access products and services in modern society. Venture back a little further, and people used stone tablets, wax seals, and papyrus scrolls to prove identities and convey authority.

In order for a reliable credential system to work, there are multiple elements that need to be in place.

Issuers are the recognised authorities that can provide such documents, be it a government authority in the case of an ID card, or an educational institution for degrees and diplomas. Not only are these recognised by law, their concepts are well-known and understood in all societies around the world.

Holders are the people, businesses, organizations, and institutions, who receive and use credentials to access products and services or to prove specific facts and achievements. Holders typically have little say over their choice of credential method and how the underlying system works.

Verification occurs when a credential is checked. For most of human history, this has been a manual process, one that’s only as reliable as the person or system doing the checking.

Photo by Liam Truong on Unsplash

As computers came on the scene, and the internet gained adoption, a digital transformation began to occur. Trust began to be built in the online world. Issuers moved online, bringing a new world of digital certificates, signatures, and identification.

From the stone tablets of ancient civilization to single sign-ons with Google, a number of factors have remained fairly constant. Striving for a faster and easier way to issue and verify credentials brings with it an increasing number of tradeoffs in security and privacy. Falsification has gotten easier, while centralized platforms continue to become targets for corruption, data breaches and internal misuse.

Enter Web3 and verifiable credentials

As the internet moves from Web2 to Web3, so too does it enter the “Own” era of “Read-Write-Own”. In order to allow the Web3 ecosystem to truly flourish, self-sovereign ownership of money and digital assets must be accompanied by self-sovereign ownership of personal credentials.

To tie down decentralized blockchains to physical documents would be counterproductive to their progress, while continuing to rely exclusively upon centralized systems defeats the purpose of self-sovereignty.

When implemented correctly, Web3 can be complemented by verifiable credentials: A suite of decentralized solutions to help us navigate the increasingly blurred lines of the physical and the virtual worlds, in a safe and secure manner.

Key use cases for Web3 verifiable credentials

Verifiable credentials make it possible to generate undeniable proof that you, the person signing a document, requesting a service, or purchasing a product, have at some point in time proven your identity or ownership of a particular asset, and that this proof is still valid.

In practice, the act of providing this proof without granting access to sensitive information should be more than enough to allow for secure digital interactions to take place.

VC triangle of Trust by Daniel H Hardman, licensed under CC BY-SA 4.0

There are various use cases for verifiable credentials; the following are complementary to an empowered self-sovereign Web3 user experience.

Reusable KYC and KYB for crypto projects

How many companies have you entrusted with your sensitive personal data? Doesn’t it leave a weird feeling in your stomach once you start counting them all up?

Most of these companies don’t need to know your date of birth, your personal ID number, or your home address. But they are required to follow protocol and ensure they’re not dealing with people linked to illicit activity. There’s a lot that can be improved here simply by separating the sensitive data itself from the fact that someone’s credentials are legitimate.

As regulations weigh down on the crypto industry, a single, reusable zero-knowledge Know Your Customer (KYC) or Know Your Business (KYB) proof would enable traders to hop between exchanges with ease, streamlining their user experience and reducing the risk of sensitive data being mishandled.

Proof of real world assets

How many people have unlawfully lost their properties and possessions during times of war? And what assurances did they have of regaining their land, or the value of their assets, once new borders were drawn up? Under a centralized structure, the new power in control can easily neglect or deny the contents of previous property records.

An open, decentralized blockchain-based ledger could fix this as it would provide an undeniable, timestamped proof of ownership. It’s the perfect platform to store and access proofs of verifiable credentials.

The use case of proving ownership of real world assets is of course not limited to times of war. Many institutions are beginning to navigate the world of asset tokenization, and verifiable credentials provide the ideal digital structure to safeguard their integrity.

Proof of personal data and reputation

Misinformation and impersonation are age-old human problems. More recently, these issues have been inflated by the rise in popularity of social media platforms, with far-reaching impact on political scenes around the world. Add to this the growing threat of AI-generated deep fake content, and the line between what’s real and what’s fake becomes impossible to discern.

Human beings linking their accounts to verifiable credentials is potentially the only way to move forward in a digital world increasingly shared with bots, scammers and manipulative organizations.

How Verida empowers builders with verifiable credentials

Think of Verida as a base layer upon which a variety of other networks, blockchains and applications can tap into for solutions linked to identity and verification.

The advantage of this modular flexibility is that users can navigate both Web2 and Web3 applications and services while receiving, storing and even sharing credentials all in one place: their Verida Wallet. Just as the leather wallet in your pocket might hold your cards, tickets, photos, and receipts, as well as your cash, the Verida wallet aims to bring far more personal utility to your digital experiences than what a typical crypto wallet might offer.

The ability to access multiple identity networks and even bridge credentials between applications and blockchains is made possible thanks to Verida’s Verifiable Credentials Developer SDK. Verida’s open platform is purpose-built to help developers choose the right credential standard to suit their applications. Depending on the standard and underlying technology, credentials can be verified either off-chain or on-chain.

Verida enables the issuance of verifiable credentials directly on the Verida network, while developers can choose from multiple credential libraries to meet the requirements of their applications.

Verida supports multiple integrations but in theory every issuer or credential provider can work with Verida. The following are some of the key credential standards supported by Verida’s growing ecosystem.

W3C Standard DID-JWT-VC

Verida’s credentials are built to meet the specifications of W3C’s DID-JWT-VC compliance standard. Verida provides an SDK to create and issue these types of credentials and supports the storage and sharing of these credentials with decentralized applications (dApps).

The W3C’s DID standard is an extremely important protocol in the identity developer space and is likely to form the basis for verification in regions like the European Union.
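As a rough, stdlib-only sketch of the JWT envelope such credentials use: real DID-JWT-VCs are signed with the issuer DID's asymmetric key (e.g. EdDSA, via an SDK such as Verida's), so the HMAC secret here is only a stand-in to keep the example self-contained, and the DIDs and claim values are hypothetical.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# HMAC stands in for the issuer DID's asymmetric key (illustration only).
SECRET = b"demo-signing-key"

header = {"alg": "HS256", "typ": "JWT"}
payload = {
    "iss": "did:example:issuer",        # issuer DID (hypothetical)
    "sub": "did:example:holder",        # holder DID (hypothetical)
    "nbf": int(time.time()),
    "vc": {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "credentialSubject": {"degree": "BSc Computer Science"},
    },
}

# A JWT is header.payload.signature, each segment base64url-encoded.
signing_input = b64url(json.dumps(header).encode()) + "." + \
                b64url(json.dumps(payload).encode())
signature = b64url(hmac.new(SECRET, signing_input.encode(),
                            hashlib.sha256).digest())
jwt_vc = signing_input + "." + signature

# A verifier recomputes the signature over the same signing input.
expected = b64url(hmac.new(SECRET, signing_input.encode(),
                           hashlib.sha256).digest())
assert hmac.compare_digest(signature, expected)
```

The `vc` claim carrying the W3C credential body inside an ordinary JWT is what lets existing JWT tooling store, transport, and verify these credentials.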

Privado ID (formerly Polygon ID)

The Verida Wallet enables users to interact with Privado ID’s impressive zero-knowledge tech stack. Privado ID has grown a large ecosystem of applications focused on improving user experience and security in the world of decentralized identity.

Source: Verida on Medium

Through its modularity and flexible platform, Verida sees itself not as a competitor, but as a partner and valuable addition to the Privado ID ecosystem. This demo video shows the seamless user experience when connecting a Verida Wallet with Privado ID to issue, store and verify zero-knowledge credentials.

zkPass

The zkPass protocol enables the conversion of private data from standard HTTPS websites into zero-knowledge proofs, without the need for additional API integrations. This enables users to create a proof of any content located on a secure website.

Examples include logging into a bank and proving you have over $10k in your account, without disclosing any other financial information. Similarly, you can log in to a crypto exchange and prove you have completed a KYC check, without disclosing any personal information.
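To illustrate the interface only (not the actual zkPass protocol, which derives zero-knowledge proofs from the TLS session itself), here is a simplified Python sketch in which a signed attestation stands in for the proof: the verifier learns that a balance exceeds a threshold without ever seeing the balance. All names, keys, and field layouts are hypothetical.

```python
import hashlib, hmac, json
from typing import Optional

# Stand-in for the proof system's trust anchor (illustration only).
ATTESTOR_KEY = b"attestor-demo-key"

def attest_threshold(balance_cents: int,
                     threshold_cents: int) -> Optional[dict]:
    """Produce a signed statement that the balance exceeds the threshold,
    without embedding the balance itself."""
    if balance_cents <= threshold_cents:
        return None
    claim = json.dumps({"claim": "balance_over",
                        "threshold": threshold_cents})
    tag = hmac.new(ATTESTOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify(attestation: dict) -> bool:
    """The verifier checks the attestation, learning only the predicate."""
    expected = hmac.new(ATTESTOR_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(attestation["tag"], expected)

proof = attest_threshold(balance_cents=1_234_567, threshold_cents=1_000_000)
assert proof is not None and verify(proof)
# The actual balance never appears in what the verifier receives.
assert "1234567" not in proof["claim"]
```

The design point is the data flow: only the predicate ("balance over $10k") and a proof of its validity cross the trust boundary, never the underlying account data.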

Source: Verida on Medium

Verida recently announced a Verida Missions partnership with zkPass. Such a partnership demonstrates the importance of interoperability and modularity as it bridges the gap between Web2 and Web3 user data in a privacy preserving manner.

To learn more about this integration, visit the zkPass section of Verida’s credentials documents.

Reclaim Protocol

Reclaim Protocol’s verifiable credentials can be received and stored on the Verida network. Thanks to the Verida proof connector and Reclaim’s zero-knowledge technology, users can reply to a verifier’s proof request in a privacy preserving manner.

Reclaim Protocol has a large library of JSON credential schemas that meet the identity requirements of various services within Web2 and Web3.

Source: Verida Developer Docs


To learn more about this integration, visit the Reclaim Protocol section of Verida’s credentials documents.

cheqd

Verida Wallet users can experience full support for verifiable credentials issued through cheqd’s enterprise-grade Credentials as a Service product. The combination of Verida’s user-friendly wallet, and decentralized storage and backup solution, together with cheqd’s infrastructure for DIDs and DID-linked resources makes for a robust tech stack in the decentralized identity space.

Source: Verida Developer Docs

The partnership with cheqd opens the door to additional collaborations such as FinClusive’s reusable KYC/KYB credential solution. FinClusive’s end-to-end integration with cheqd and Verida not only provides users with reusability, it also ensures client privacy, portability, and embedded compliance controls.

Source: cheqd.io blog

The FinClusive use case is but one example of a streamlined off-the-shelf identity solution that developers can expect to access once they explore the Verida network.

An open building site for dApp developers

Having Verida as the base layer for your application can future proof your implementation. Not only does it grant instant access to the Verida SDK integration, it also provides developers with the flexibility to mix and match multiple credential standards for different use cases.

Verida is on the forefront of decentralized digital identity, offering an open and collaborative ecosystem, not a siloed platform. Numerous plug-and-play KYC and KYB solutions are readily available to dApp builders, which can expedite the process from proof of concept to legitimate product or service.

To learn more about Verida’s framework for verifiable credentials, head over to the Credentials page on our Developer Docs.

And if you’re a developer looking to future proof your application with a seamless identity solution, get in touch with our experts in our Discord server or register your project for the Verida Ecosystem here.

We’re excited to see what you’re building!

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. With cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for various industries. Verida’s ecosystem of KYC partners and technologies is ideally suited to help Kima expand into new markets, streamlining processes and efficiency for compliant transactions. For more information, visit Verida.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

True Web3 Ownership Starts With Verifiable Credentials was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

Metadium Explorer Update


Dear Community,

We would like to announce updates to Metadium Explorer. This update was made to reduce site load and display data more efficiently. Details are as follows:

Internal Transaction — Detail Mode applied

Update details

The Internal Transactions list has been modified to display only entries where Create and Value exist. You can check all types of information by clicking the Detail Mode button located at the top right of the list.

Reason for update

To reduce the load caused by retrieving all types of information, which had grown with the increase in Metadium’s internal transaction data.
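Interpreting the update as showing only contract creations and value-carrying transactions by default, the filtering logic might be sketched as follows; the record fields and type names below are illustrative assumptions, not Metadium Explorer's actual schema.

```python
# Hypothetical internal-transaction records; the "type" and "value"
# field names are assumptions, not the explorer's actual schema.
internal_txs = [
    {"type": "call",       "value": 0},
    {"type": "create",     "value": 0},    # contract creation -> shown
    {"type": "call",       "value": 150},  # value transfer -> shown
    {"type": "staticcall", "value": 0},
]

def default_view(txs):
    """Default mode: show only Create operations and value transfers.
    Detail Mode would return the list unfiltered."""
    return [t for t in txs if t["type"] == "create" or t["value"] > 0]

assert len(default_view(internal_txs)) == 2
assert default_view(internal_txs) != internal_txs  # lighter than Detail Mode
```

Filtering at display time like this avoids loading every transaction type for the common case, which matches the stated goal of reducing site load.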


- The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Metadium Explorer Update was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 25. July 2024

KuppingerCole

Passwordless 360°: A Game-Changing Approach to Authentication within Your Business


In today's digital landscape, traditional password-based authentication is increasingly challenged by the need for more secure, efficient, and user-friendly methods. The shift towards passwordless authentication is reshaping how users interact with digital services, promising enhanced security and improved user experiences across various sectors.

Modern technology offers innovative solutions to implement passwordless authentication comprehensively. By leveraging biometrics, passkeys, hardware tokens, and behavioral analytics, organizations can create a seamless and secure authentication process that spans across different user groups, including consumers, employees, and partners.

Alejandro Leal, Research Analyst at KuppingerCole, will explore the technological advancements and market dynamics driving this significant shift. He will offer practical insights into deploying passwordless solutions and examine the challenges associated with traditional authentication methods.

Haider Iqbal, Director of Product Marketing for Identity & Access Management at Thales, will present Thales' new approach that allows organizations to use one solution provider to meet all their authentication needs. He will demonstrate how this 360° approach can be implemented across customers, workforce, business customers, partners, and suppliers' ecosystems.




auth0

Secure Node.js Applications from Supply Chain Attacks

Guidelines and security best practices to protect from third-party threats

Nuggets

Nuggets Partners With Carahsoft to Bring Private, Reusable Identity and Passwordless Solutions to…

Nuggets Partners With Carahsoft to Bring Private, Reusable Identity and Passwordless Solutions to US Government Agencies

The partnership advances interoperability and security of existing CIAM systems through trusted and verified decentralized identity and verifiable credentials

We’re delighted to announce that we have partnered with Carahsoft Technology Corp., The Trusted Government IT Solutions Provider®. Through this partnership, Carahsoft will serve as Nuggets’ Master Government Aggregator®, making our products available to the Public Sector through Carahsoft’s reseller partners and NASA Solutions for Enterprise-Wide Procurement (SEWP) V, Information Technology Enterprise Solutions — Software 2 (ITES-SW2), National Association of State Procurement Officials (NASPO) ValuePoint and OMNIA Partners contracts.

With the growing need for trusted digital solutions, decentralization offers unmatched privacy and security. Partnering with Carahsoft will amplify our reach within the Government, providing them with our cutting-edge private decentralized identity solutions. We are excited to leverage Carahsoft’s extensive network and expertise to enhance operational efficiency and security for our customers.

For some background, Public Sector organizations face unique challenges surrounding credentials, identity reusability, and interoperability. They need tools to combat the ever-evolving issues around fraud, AI, deep fakes, ransomware, and data privacy that will also deliver a seamless and frictionless user experience while increasing operational efficiencies.

Government agencies face two significant obstacles to data integrity: the increasing cost and challenges associated with data privacy and the acceleration of sophisticated scams and rampant fraud. Nuggets solves for both. Our fully decentralized wallet and platform protect organizations from data breaches, ransomware, and fraud while ensuring digital identities always remain verified, private, and secure.

Today’s first-generation systems contain enticing silos of sensitive personal data, making them a prime target for hackers and data breaches. These siloed components from multiple service providers can be difficult to integrate, creating poor visibility for individuals who don’t have an integrated stack.

Additionally, legacy systems have created a host of significant issues both in terms of security and business objectives and are hindering growth. They are exposed to high levels of fraud, have low assurance and utilize multiple authentication factors while their authorization controls are often limited.

By implementing Nuggets across existing customer identity and access management (CIAM) solutions, agencies can adopt a more modern and adaptive infrastructure, enabling transformational shifts.

Brian O’Donnell, Vice President of Cybersecurity Solutions at Carahsoft said: “As Government agencies face growing demands for secure and efficient digital processes, Nuggets’ advanced technology offers an ideal solution. Together with our reseller partners, we are dedicated to providing these innovative tools to enhance security and decrease fraud with a premier user experience.”

Nuggets is a Decentralized Self-Sovereign Identity and payment platform that guarantees trusted transactions, verifiable credentials, uncompromised compliance, and the elimination of fraud — all with a seamless user experience and increased enterprise efficiencies.

We’re building a future where digital identity is private, secure, user-centric, and empowering.

We’d love to hear from you if you want to enhance your data privacy and security offering.

You can learn more about our solutions here or get in touch with us here.

Nuggets Partners With Carahsoft to Bring Private, Reusable Identity and Passwordless Solutions to… was originally published in Nuggets on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

Tokeny’s Talent | Omobola

Omobola Giwa is Marketing Intern at Tokeny.  Tell us about yourself!

Hello, I am Omobola (most people prefer Bola because it’s shorter). I come from the vibrant city of Lagos, Nigeria, and I am married to my birthday mate—a delightful coincidence that makes our shared celebrations even more special. Now, we’re a trio with the addition of our lively and adorable 2-year-old son, who fills our lives with joy and laughter.

What were you doing before Tokeny and what inspired you to join the team?

I started my career in the legal field, having obtained a law degree from the University of Lagos, Nigeria, but the fast-paced world of business soon piqued my interest, so I transitioned into business development and project management roles, where I spent a few fulfilling years.

Upon relocating to Luxembourg, I seized the opportunity to deepen my business acumen by enrolling in a master’s program in Entrepreneurship and Innovation at the University of Luxembourg. As part of this program, I undertook a 3-month internship working in the marketing team at Tokeny, a decision that has proven to be both strategic and rewarding.

Choosing Tokeny for my internship was driven by a desire to explore new frontiers. Firstly, I craved uncharted territories. Fintech, especially tokenization, is a cutting-edge frontier, and I wanted to be a part of it! Secondly, the marketing role offered a fresh challenge. While I had engaged in marketing activities in my previous roles, I had never dedicated myself solely to this discipline. Finally, after exploring Tokeny’s website and employee testimonials, I was struck by the company’s people culture. The stories I read reflected a supportive and inclusive environment, perfect for professional growth.

How would you describe working at Tokeny?

My experience of working at Tokeny can be described as a unique blend of challenging work and supportive colleagues. Challenging because, even as an intern, you are entrusted with significant autonomy. You tackle diverse tasks, encouraging you to take ownership and innovate. This environment pushes you to grow and develop new skills continuously.

On the other hand, Tokeny is incredibly supportive. The company fosters a safe space where you are encouraged to give your best without fear of judgment. The sense of value and backing from the team is palpable. Colleagues are open, quick to acknowledge mistakes, and always ready to provide constructive feedback and assistance.

This culture of trust and collaboration not only boosts your confidence but also instills a strong commitment to excel and contribute meaningfully to the company’s success. It’s an environment where I felt empowered to go above and beyond.

What are you most passionate about in life?

Children are my greatest passion. Their innate curiosity, honesty, and unfiltered joy never cease to amaze me. I am captivated by their simplicity and the way they view the world with such wonder and trust. Having children around brings me immense joy, and their genuine nature is both inspiring and heartwarming.

However, I am also deeply concerned about their vulnerability. I dream of one day establishing an entity that is dedicated to supporting underprivileged children, providing them with the basic needs and opportunities they deserve.

What is your ultimate dream?

My ultimate dream is to live a life filled with happiness and to be able to look back with no regrets. I aspire to create a meaningful and fulfilling life, both personally and professionally, where I can cherish every moment and be proud of the impact I’ve made.

What advice would you give to future Tokeny employees?

Embrace the challenges and opportunities that come your way. Tokeny is a place where you can grow and thrive if you are willing to take initiative and be open to learning.

What gets you excited about Tokeny’s future?

Tokenization is the future and Tokeny is part of the creators of that future—that in itself is exciting! We’re building the future of finance, and I can’t wait to see what incredible things Tokeny achieves next.

She prefers:

- Coffee / Tea: None ✓
- Movie ✓ / Book
- Work from the office / Work from home ✓
- Dogs ✓ / Cats
- Call ✓ / Text
- Burger ✓ / Salad
- Mountains / Ocean ✓
- Wine ✓ / Beer
- Countryside / City ✓
- Slack ✓ / Emails
- Casual ✓ / Formal
- Crypto ✓ / Fiat ✓
- Night ✓ / Morning

More Stories

- Tokeny’s Talent | Thaddee’s Story (2 June 2022)
- Tokeny’s Talent | Alexis’ Story (26 October 2022)
- Tokeny’s Talent | Shurong’s Story (20 November 2020)
- Tokeny’s Talent | Cyrille’s Story (17 September 2021)
- Tokeny’s Talent | Nida’s Story (15 January 2021)
- Tokeny’s Talent | Xavi’s Story (19 March 2021)
- Tokeny’s Talent | Barbel’s Story (17 December 2021)
- Tokeny’s Talent | Ivie’s Story (1 July 2022)
- Tokeny’s Talent | Cristian (13 June 2024)
- Tokeny’s Talent | Tony’s Story (18 November 2021)

Join Tokeny Solutions Family

We are looking for talents to join us. You can find the open positions by clicking the button below.

Available Positions


The post Tokeny’s Talent | Omobola appeared first on Tokeny.


KuppingerCole

Sep 05, 2024: Authenticating Identities in the Age of AI: Strategies for Trustworthy Verification

In today's digital world, identity authenticity faces constant scrutiny, especially with the emergence of generative AI. However, modern tech provides innovative solutions. Chipped identity documents offer a trusted verification basis, embedding secure chips with verified data. Advancements like biometric authentication and blockchain-based verification ensure enhanced security and integrity. With these innovations, organizations can navigate identity verification confidently.

Thales Group

Thales AVIATOR 200S system certified on Boeing 737

25 July 2024

Thales, in partnership with Avionics Support Group (ASG) announces the availability of an FAA (Federal Aviation Administration) Supplemental Type Certificate (STC) for installation of the Thales AVIATOR 200S SwiftBroadband-Safety (SB-S) satcom system on Boeing 737 NG and MAX aircraft - ST04585AT.

The AVIATOR 200S provides secure, segregated cockpit safety and Internet Protocol (IP) data and voice communications over the Viasat+Inmarsat SwiftBroadband-Safety (SB-S) satellite communications network, delivering near-global coverage. Viasat+Inmarsat SwiftBroadband has pioneered the IP-enabled flight deck, introducing a wide range of sophisticated solutions that increase communication and capability across Air Traffic Control, aircraft operational and cabin crew operations.

The AVIATOR 200S is the only Class 4 Compact SB-S system on the market. It offers 20x faster data rates than Classic Aero satcom, as well as significantly reduced installation requirements, power consumption, weight and cost.

AVIATOR 200S complies with ED-202A (DO-326A) Airworthiness Security Process SAL 3. It also complies with RCP240/RSP180 for reduced-separation oceanic operation, as well as ICAO Class B Satcom RCP130/RSP160 for domestic operation where approved. These features make it future-proof, supporting i4D/4D trajectory-based routing for continental and oceanic use (ATN/OSI and ATN/IPS).

“More than 7,000 737 NG and 1,500 737 MAX aircraft can now add AVIATOR S or replace a legacy satcom to enhance operational capabilities. At a system weight of 6.1 kg/13.5 lbs, the AVIATOR 200S offers the lowest SWaP of any next-generation satcom system. This system has the potential to touch all aspects of flying an aircraft, including daily operational efficiencies, EFB applications, and real-time predictive maintenance.” Robert Holcomb, Thales Aerospace Communications

All of these capabilities in one system enable much more streamlined and efficient flying for a greener future through reduced fuel consumption. 

“This ARINC 781 compliant system will be a robust solution now and in the future as the industry moves to dual dissimilar satcom as the Long Range Communication System to replace dual HF systems. It is ready today to fully support FAA NextGen and EASA Iris air traffic management requirements.” Marius Du Plessis, Thales Aerospace Communications Satcom Product Strategy

The STC, developed by ASG in partnership with Thales, covers the Boeing 737-700/-800/-900/-900ER/-8/-9 Series of aircraft.

“We are excited to work in tandem with Thales Aerospace Communications on this STC, and we look forward to helping bring this advanced satcom technology to the large Boeing 737 NG and MAX fleets worldwide.” Hugo L. Fortes, ASG’s Principal / FAA-DAR


PingTalk

Unleashing the Power of Orchestration in Self-Managed PingFederate Environments

PingAM helps PingFederate customers orchestrate seamless and secure user experiences for complex use cases at scale.  

Lockstep

Making data valuable


In the latest episode of our podcast Making Data Better, George Peabody and I are joined by the former NSW Chief Data Scientist Dr Ian Oppermann. Among many other things, Ian helped set up the NSW Data Analytics Centre, he led the development of a series of superb papers on data sharing and the digital self at the Australian Computer Society, and he represented Australia in the development of the new international standard for Data Quality, ISO 8000.

We covered a lot of ground, but I especially liked our discussion with Ian about the value of data and monetisation of digital assets.

It’s almost a taboo topic. Surveillance capitalism has come to dominate and poison how people regard data monetisation yet there are legitimate interests in realising the value of data.

So we asked Ian about making data value a respectable idea; how do we make it governable?

In conversation, Ian highlighted the problem that “no one’s really sure how [data is] valuable”.

“We are, to this day, still without an accounting standard which values data. We don’t have a way of measuring it from a finance perspective”.

Yet we all know that data is an asset. Data contributes the majority of the value of digital companies like Facebook and LinkedIn.

So how do we measure data quality? Ian explains that “it’s not about simple things like format. It’s about the entire governance process of data. How does data flow into your organization? What are the controls and the chain of custody, the chain of authorizing frameworks?”

With government being the source of so much critical foundational data, George and I have been trying to conceptualise distribution networks to make verifiable data accessible at scale. Ian shares his practical experience about change management and the role of government.

Take a listen! And please let us know what you think.

The post Making data valuable appeared first on Lockstep.

Wednesday, 24. July 2024

TBD on Dev.to

7 Unexpected Ways Verifiable Credentials are Used Today


"Verifiable Credentials seem niche. Only a small group of technologists would be interested in this," a conference attendee declared after I enthusiastically described my company's work in the Global Payments and Self-Sovereign Identity (SSI) ecosystem.

This comment gave me pause. Less than a year ago, I first encountered Verifiable Credentials (VCs) - a W3C standard for digital credentials that state specific facts about individuals, organizations, or entities. My initial reaction was similar to the conference attendee’s response. However, after interviewing SSI builders on a weekly livestream, my perspective changed dramatically.

While VCs are relatively new and primarily explored by SSI enthusiasts, their benefits extend far beyond this group. Many of the services and products you use today already rely on them – even if you don't realize it.

From healthcare to content creation, finance to travel, VCs are changing how we share and verify information. Here's a look at seven real-world applications of VCs that may surprise you:

1. Mobile Driver’s Licenses
2. The Pharmaceutical Supply Chain
3. Content Creation and Generative AI
4. Music Copyright
5. Loan Applications
6. Online Marketplaces and Catfishing
7. Native Tribes
Learn More

1. Mobile Driver’s Licenses

If you often misplace your wallet like me, there's good news — your state may support the use of Mobile Driver's Licenses (mDLs). Louisiana was the first state to implement mDLs in 2018. Since then, mDLs have been gaining traction across the United States. With mDLs, residents of participating states can store a digital version of their driver's license on their smartphones. This allows users to leave their physical ID at home and use their phones for identification in various situations, including travel, age-restricted services, voting, and law enforcement interactions.

VCs are the technology behind mDLs because they offer features like revocation, authentication, expiration, and tamper evidence. If you use an mDL, you're using VCs!

Source: https://lawallet.com/
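As a rough illustration of the properties mentioned above (revocation, expiration, tamper evidence), here is a toy credential issuer and verifier. This is not the W3C VC Data Model: real VCs use public-key signatures and standard proof formats, and every name here (ISSUER_KEY, REVOKED, issue, verify) is invented for the sketch.

```python
# Toy sketch only: real Verifiable Credentials follow the W3C VC Data Model
# and use public-key signatures. An HMAC stands in for a cryptographic proof.
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"dmv-secret-key"   # hypothetical issuer secret
REVOKED = set()                  # hypothetical revocation registry

def issue(subject, claims, ttl_seconds=3600):
    body = {"subject": subject, "claims": claims,
            "expires_at": int(time.time()) + ttl_seconds}
    payload = json.dumps(body, sort_keys=True).encode()
    body["proof"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify(cred):
    body = {k: v for k, v in cred.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["proof"])  # tamper evidence
            and cred["expires_at"] > time.time()          # expiration
            and cred["proof"] not in REVOKED)             # revocation

mdl = issue("alice", {"age_over_21": True})
tampered = dict(mdl, claims={"age_over_21": False})
```

Changing any claim invalidates the proof, and adding the proof to the revocation set invalidates the credential without touching the holder's copy.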

2. The Pharmaceutical Supply Chain

Dr. Carsten Stöcker, CEO of Spherity, introduced me to a fascinating application of VCs in the pharmaceutical industry. Pharmacies often trade medicines with each other to maintain their stock, but this exchange requires complex verification processes. They must verify the legitimacy of three key elements:

- the medicine
- the organization they’re trading with
- the provider making the trade

Caro.vc, a Spherity company, employs VCs to simplify this process and reduce errors. Their solution allows pharmacies to quickly and securely verify all these elements, ensuring the integrity of the pharmaceutical supply chain.

To learn more about this use case, check out this discussion between Dr. Carsten Stöcker and the Developer Relations Team at TBD:

3. Content Creation and Generative AI

Content creation has played a considerable role in our online lives for the past few years. For some, it has become a career, launching individuals into fame. However, whether you're an artist, seamstress, or photographer, people often steal and claim work that you made.

Additionally, a new form of content creation has hit the scene: generative AI. While many use it for productivity, others exploit it to spread misinformation or generate false images and videos in the likeness of others.

Organizations like Adobe, BBC, Microsoft, Sony, and Nikon formed the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity to address these issues. These groups developed Content Credentials, which use VCs to tackle the challenges of content theft and misuse of AI-generated content, aiming to protect creators’ rights.

Source: https://contentauthenticity.org/blog/community-story-wrapt

Click this link to read the case study, view the above image, and view the image’s content credentials.

4. Music Copyright

Musical artists typically don't own their masters, meaning they lack full control over their recordings when signing a record deal. Today, more musical artists are opting to own their masters to regain control over the distribution of their work. Cole Davis describes the publishing and distribution of music as a "music supply chain" and observed a disjointed process involving agreements through text messages, scattered emails between lawyers, and inconsistent procedures.

To address these issues, Davis built Switchcord, using VCs to provide cryptographic proof of who was involved in creating a song, when it happened, and what contracts were signed. This ensures all participants receive proper credit and compensation, preventing false claims.
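Switchcord's internals aren't described beyond the summary above, but the who/when/what idea can be sketched as a tamper-evident event log: every step of the "music supply chain" is hashed together with the step before it, so rewriting history breaks the chain. All function and field names here are invented for illustration.

```python
# Hypothetical sketch of a tamper-evident "music supply chain" log.
# Not Switchcord's API; a hash chain stands in for signed VCs.
import hashlib
import json

def add_event(chain, actor, action, timestamp):
    prev = chain[-1]["digest"] if chain else "genesis"
    event = {"actor": actor, "action": action,
             "ts": timestamp, "prev": prev}
    event["digest"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    chain.append(event)
    return chain

def chain_is_valid(chain):
    prev = "genesis"
    for event in chain:
        body = {k: v for k, v in event.items() if k != "digest"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if event["prev"] != prev or event["digest"] != digest:
            return False   # someone rewrote history
        prev = event["digest"]
    return True

song = []
add_event(song, "writer", "lyrics delivered", 1)
add_event(song, "producer", "master recorded", 2)
add_event(song, "label", "contract signed", 3)
```

Because each event commits to the digest of the one before it, altering any participant's entry invalidates every later entry.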

To learn more about this use case, check out this discussion between Cole Davis and the Developer Relations Team at TBD:

5. Loan Applications

The U.S. credit system requires residents to accumulate debt as a prerequisite for obtaining loans, leading to more debt. Recent data from the Federal Reserve Bank of New York underscores this systemic issue, revealing that 1 in 5 applicants for mortgages, car loans, or other loans were rejected — the highest rate in five years.

FormFree is addressing this problem using the Web5 SDK to provide VCs for loan borrowers through their Passport product. Their approach involves creating an anonymized, tamper-proof credit profile as a VC for lenders to review and make offers, aiming to put power back in the hands of the borrower.

To learn more about this use case, check out this discussion between the FormFree team and the Developer Relations Team at TBD:

6. Online Marketplaces and Catfishing

Unfortunately, online marketplace scams are common. From purchasing a car to renting a home or adopting a pet, you can buy and sell almost anything online. However, there's no foolproof mechanism to ensure the seller is trustworthy.

With the rise of social media, catfishing — where a person pretends to be someone they're not while online dating — has also increased dramatically. While many believe they're not susceptible, 23% of online dating participants reported being catfished, and 41% of catfish victims are between the ages of 18 and 34.

Jeffrey Schwartz created Dentity to reduce the frequency of scams and catfishing incidents. Dentity uses VCs to verify individuals on any platform, from dating apps to online marketplaces.

To learn more about this use case, check out this discussion between Jeffrey and the Developer Relations Team at TBD:

7. Native Tribes

Special Economic Zones (SEZs) are designated areas within a country that operate under different business and trade laws than the rest of the nation. These zones typically offer incentives like tax breaks and simplified regulations. The Catawba Indian Nation established their own SEZ called the Catawba Digital Economic Zone, with the goal of driving economic development, attracting businesses, and creating opportunities for tribal members.

The Catawba Digital Economic Zone is using the Web5 SDK to grant VCs to members. These VCs allow users to prove their identity and achieve regulatory compliance within the zone.

To learn more about this use case, check out this discussion between the Catawba Digital Economic Zone Team and the Developer Relations Team at TBD:

Learn More

Verifiable Credentials are making a tangible difference by solving real problems for real people - simplifying loan applications, protecting artists' rights, ensuring pharmaceutical safety, and supporting tribal sovereignty.

If you have ideas for building apps with VCs, check out these resources:

Build your own Verifiable Credentials with the Web5 SDK How TBD is using VCs in the tbDEX SDK TBD’s YouTube Channel

Holochain

Holochain 0.3, a new Launcher, and… HC on Mobile!

Dev Pulse 140

Holochain 0.3.1 is the newest recommended release for you to build your hApps on. It comes with a raft of performance improvements and bug fixes, and not too many breaking changes. The happy news with this release is that validation is considerably more performant, hitting fewer dependency deadlocks. (There’s other big news that developers need to know about — read below.)

This release also comes with a companion Launcher, which returns to Electron for the UI. This should help front-end devs have a more predictable development and debugging experience.

In the ecosystem, our friends at darksoil studio have come out with p2p Shipyard, a super powerful tool for building self-contained installables of your hApp for Windows, macOS, Linux, and Android (and iOS in the future)!

And finally, you can see what others are doing with Holochain on mobile; at the very end I’ll share a demo video of Relay, an Android messaging app that’ll ship on Volla Phone’s upcoming Quintus flagship smartphone.

Holochain 0.3.1: Validation performance, isolated and authenticated app interfaces, HDI/HDK changes

Release date: 11 June 2024
HDI compatibility: 0.4.x
HDK compatibility: 0.3.x
JavaScript client compatibility: 0.17.x
Rust client compatibility: 0.5.x
Tryorama compatibility: 0.16.x
Scaffolding tool: 0.3000.1
Launcher: 0.300.1

You should notice a decent improvement in the time it takes for published data to appear in the DHT, especially data that has lots of validation dependencies. That’s because the validation workflow has been rewritten. What’s changed? There’s now a single validation thread per DHT space, rather than multiple threads, which means that the validation queue can more intelligently prefetch dependencies and avoid deadlocks due to missing dependencies. There are other tweaks to validation as well, such as reducing the op validation retry timeout and making it adapt to the number of dependencies the op is waiting for.
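As a toy model of that single-queue design (not Holochain's actual implementation), the key move is to defer any op whose dependencies haven't validated yet instead of blocking a thread on them, which is what made deadlocks possible before:

```python
# Toy model of a single-worker validation queue with dependency deferral.
# Illustrative only; Holochain's real workflow is far more involved.
from collections import deque

def validate_all(ops):
    """ops maps op id -> list of dependency op ids; returns validation order."""
    queue = deque(ops)
    done, order = set(), []
    deferred = 0
    while queue and deferred <= len(queue):
        op = queue.popleft()
        if all(dep in done for dep in ops[op]):
            done.add(op)
            order.append(op)
            deferred = 0
        else:
            queue.append(op)   # retry once its dependencies are validated
            deferred += 1
    return order
```

Ops whose dependencies never arrive are simply left unvalidated once the queue stops making progress, rather than wedging the worker.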

There are some breaking changes to the P2P protocol and the database structure, meaning that peers using 0.3 won’t be able to communicate with peers using 0.2 or import an 0.2-based application’s database.

But that’s okay, because there are some breaking changes to the HDI and HDK anyway. There aren’t too many, and they should be easy to update your code for. Our core team has written a great 0.2 → 0.3 upgrade guide to help you out.

Lastly, there’s a big change to the application API: There is now only one app API WebSocket port, and each UI must pre-request an authentication token and supply it when it establishes a session. The UI will only have access to the one app it requested access for.

Fortunately most of this change has been abstracted away for you by the new releases of the JavaScript and Rust clients, and if your UI is deployed in the Launcher it’ll get an access token behind the scenes. (In fact, connecting to an app interface requires fewer parameters now — see the upgrade guide for the changes you need to make.) But if you’re writing your own client, you’ll need to get familiar with what these changes mean for establishing an authenticated connection.
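The token flow can be pictured with a minimal in-memory model: request a token for one app from the admin side, then open an app-interface session that is scoped to that app alone. All names here are hypothetical stand-ins for the real conductor admin/app API calls, and the token is modeled as single-use for simplicity.

```python
# In-memory model of pre-requesting an auth token and connecting with it.
# Hypothetical names; not the actual Holochain conductor API.
import secrets

class Conductor:
    def __init__(self):
        self._tokens = {}                    # token -> app id

    def issue_app_auth_token(self, app_id):
        token = secrets.token_hex(16)
        self._tokens[token] = app_id
        return token

    def connect_app_interface(self, token):
        # Each session only sees the single app its token was issued for.
        if token not in self._tokens:
            raise PermissionError("unauthenticated app interface connection")
        return self._tokens.pop(token)       # consumed on first use

conductor = Conductor()
token = conductor.issue_app_auth_token("kando")
session_app = conductor.connect_app_interface(token)
```

A UI that never obtained a token (or tries to reuse one) is refused, which is the isolation property the new app API is after.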

If you want to read the changes in detail, read the changelog all the way back to the 20230503.003735 release.

Get it via Holonix or upgrade your existing app.

JavaScript client 0.17.0 and 0.17.1, Tryorama 0.16.0, Rust client 0.5: Updates for Holochain 0.3

The header says it all! This line of releases makes the client libraries and testing framework compatible with the preauthentication and per-app binding of the app API’s new security model. There’s also a small bugfix in the JS client to address a bug in the hash utility functions.

Check out the changelog for the Rust client for details on what’s changed — the process of preauthenticating a client connection is a little more manual than for the JavaScript client.

You can get the JS client and Tryorama from NPM and the Rust client from crates.io by updating the dependency in your package.json and Cargo.toml files, then following the upgrade instructions for your UI code.

hc-scaffold 0.3000.1: Streamlined flow, React and headless templates, simplified CSS

This release contains more than just an update to Holochain 0.3. A lot of polish has gone into this release, and as a sometime code reviewer I feel like this tool is coming into maturity. The developer UX has been tweaked in a lot of little ways that make it a pleasure to work with — particularly the newly streamlined “I only want a single-zome web app” flow. Now, when you run hc scaffold web-app you’re asked if you want to create a DNA and an integrity/coordinator zome pair.

The templates, which traditionally used Material UI to make it look nice, now use a single, clean, basic stylesheet built with Tailwind CSS. The intention is that this will be easier to work with, and easier to just delete and replace with your own stylesheet.

Alternative package managers are now supported via a new -p or --package-manager flag — in addition to npm, you can also scaffold a project that uses pnpm, yarn, or bun to run all of the building, testing, and UI tasks in its package.json files.

The codebase has also been cleaned up a lot. What does this mean for you? Well, if you’d like to try creating your own template, you don’t have to do everything from scratch or copy any non-UI templates — the common templates (like Tryorama tests) are now shared among UI templates.

This scaffolding tool is now available in the Holonix dev environment, so all you need to do to use it in a new project is go

nix run github:/holochain/holochain#hc-scaffold -- web-app

(once you’ve installed Holonix, of course).

I’m still updating the Getting Started guide, so it’s a bit out of sync with the current state of the scaffolding tool. But honestly, it’s so easy to use that you probably won’t need my guide anyway.

Launcher 0.300.0: It’s smaller on the outside

When my colleagues showed me the latest Holochain Launcher, I was kinda startled. The Launcher I was familiar with was gone, replaced by… less. Much less. And I rather liked it!

You can tell that a lot of user experience work has gone into this Launcher. The startup UI consists of a tray icon and a little search box, similar to Spotlight on macOS.

That’s it!

I don’t have any apps installed right now, so I’ll try looking for something.

Not much there yet, but Kando looks useful, and I see a nice green ‘Verified’ badge (note: this helps protect you from accidentally installing a malicious HoloFuel lookalike that steals all your fuel — the Holochain team will be vetting all apps for now, but you can still sideload ‘unverified’ apps manually or find them in the hApp store, and there will eventually be a more scalable verification process).

The downloading spinner is simpler and easier to understand. This’ll matter less as Holochain gets faster, but at any stage of maturity it’s nice feedback.

The installation dialogue is easier to understand too. The network seed field has been renamed with less jargon, and a handy tooltip tells you what it does.

Nice, now I’ve got a kanban app installed!

I know from talking to my colleagues that there’s a lot more planned to make Launcher even nicer to use. Pretty excited about the direction things are heading.

And everything — App Store searches, app installation, and the KanDo app itself — feels a little snappier. It’s hard to measure this sort of thing as an end-user, but it feels like Holochain 0.3 is just faster and lighter on resources.

If you’re a developer, you can check that gear icon in the upper right corner, go into ‘System settings’, and install the developer tools. This lets you help host the DevHub package repository that powers the App Store, and you can also add a new hApp to the repository with only a couple clicks. (Note that we’re also working on a DevHub CLI that gives you more power over things like sharing reusable components.)

And one more thing for developers — this version of Launcher reverts back to Electron for all the UI stuff. You may remember that we replaced Electron with a similar library Tauri a year or two ago, and while it seemed to hold promise, the reality wasn’t as great as expected. The biggest issue was the webview — Tauri uses your OS’ system webview, which might be ancient and out of date with current web standards. This was causing a lot of debugging headaches for devs, because they didn’t know which browser their UI needed to target. Electron comes with a recent chromium binary, which means there’s a stable front-end target for you to work with.

Get Launcher 0.300.0 from GitHub, give it a spin, and let us know what you think!

If you’re on Ubuntu 24.04, take note: this version of Ubuntu made some annoying changes to protect you from malicious apps. You’ll need to follow our recommended steps to get the Launcher to run. (The binary you want to target is /opt/Holochain Launcher (0.3)/holochain-launcher-0.3 for the deb package and <path-to-your-app-image>/holochain-launcher-0.3-0.300.0.AppImage for the AppImage package. In the future we’ll ship an AppArmor profile in the deb package to fix this problem.)

p2p Shipyard: build your hApp for mobile and desktop OSes

darksoil studio is a small dev shop building core infrastructure for Holochain and “simple peer to peer apps for groups of people to meet their non-digital needs”. And, in order for those apps to meet people’s needs, they have to be available in ways that work for them. Nowadays, that often means an app that you can download from an app store. And mobile is a must.

So darksoil studio set out to make Holochain ready for mobile. It required a huge amount of work, but now, by bridging Holochain and Tauri, they’re able to bundle Holochain, a hApp, and a UI into an Android APK that can be submitted to the Play Store or f-droid (and EXEs for Windows, DMGs for macOS, and AppImages for Linux too).

This is pretty huge. We’ve long recognised it was necessary to get Holochain working on mobile, but needed to focus our energy on getting Holochain itself (and Holo Hosting) ready for general use. So it’s fantastic that a dev team in the ecosystem has done the work to build the critical infrastructure.

Now you can build your hApp with the tool they created, p2p Shipyard, and start testing it on (almost) all the OSes. It’s not Open-Source — yet — but it is Source-Available, so anyone can audit the code for security. Towards patterns for sustainable Open-Source development, darksoil is running an experiment with something they’re calling “retroactive crowdfunding”. Here’s what they’re promising and asking:

The p2p Shipyard is currently Source-Available, and once the retroactive crowdfunding goal is reached, it will be Free and Open Source, Forever.

Until then, a license is needed to use it, with all license fees going towards the crowdfunding goal. If you’d like a license to use the p2p Shipyard for your project, get in touch with us here! 

Here’s what’s been done to make Holochain ready for mobile, if you’re curious:

- Mobile nodes can be set to have ‘zero-width arcs’ — that is, they’re full DHT peers but don’t contribute to the storage of other peers’ data. (This means you’ll want to set up something to keep the DHT available — if your userbase is big enough, the number of desktop users might be sufficient, but otherwise you may want to establish a cultural practice of asking people to leave their computer running so their peers can still get data. darksoil is also working on a local-server solution and, of course, Holo hosting will also be an option soon.)
- p2p Shipyard uses Tauri, which is currently the most viable way to integrate Holochain and a UI into a native Android or iOS app. (You can’t do it with Electron.) The darksoil team have built it as a Tauri plugin which you can fit into your build pipeline.
- There’s been work to make wasmer, the WASM VM that Holochain uses, ready for mobile. Holo itself has contributed some funds to the wasmer project to get it working in iOS, which will make it possible to build iOS apps using p2p Shipyard in the future.
- The darksoil team have done a lot of work to get Holochain building for Android and iOS — experimenting, wrestling build systems, equipping Holonix to do the right thing, reporting bugs, fixing them, and of course creating p2p Shipyard as a way to reproducibly build binaries for these OSes.

Here’s a couple screenshots of my colleague Eric using p2p Shipyard to bundle up two hApps his spinoff dev shop has been building.

And here are two more (kando & emergence)!... Check out https://t.co/GeVWNgqrE7 tooling for building and deploying decentralized Holochain based applications on both desktop AND mobile. #holochain pic.twitter.com/BJhgBA4CW5

— Eric Harris-Braun (@zippy314) June 6, 2024

And speaking of mobile…

Holochain on the Volla Phone Quintus

You’ve probably seen this already, but here’s a demo of a hApp running on a smartphone from Volla, a small German manufacturer who are passionate about serving their customers — as in, the people who buy their phones, not the ad companies peering through the airwaves at them. This means privacy-respecting alternatives to what we’re stuck with now.

The app isn’t anything ground-breaking — it’s just a chat app — but what’s amazing is that it’s running fully peer-to-peer (Volla opted to have the phone owners be active DHT participants, hosting each other’s data rather than using the zero-width arc strategy). It’s going to be accompanied by a backup app, and probably more apps in the future.

And it was made possible thanks to the wonderful community Holochain finds itself in – particularly Hedayat Abedijoo and Nick Stebbings who instigated a relationship with Volla, and darksoil who produced p2p Shipyard!

Cover photo by Bernd 📷 Dittrich on Unsplash 


Anonym

Gartner Confirms Anonyome Labs’ Solutions Offer Competitive Edge


The 2024 Gartner Emerging Tech Impact Radar confirms that Anonyome Labs’ enterprise solutions are among the highest impact technologies for gaining a competitive business advantage. 

The latest annual impact radar nominates privacy and transparency as one of four emerging tech themes with the most potential to disrupt a broad cross-section of markets. Anonyome Labs’ market leading solutions fit squarely within this theme. 

Gartner then pinpoints decentralized identity (DI) and privacy-enhancing technologies (PETs) as two emerging technologies within that theme that product leaders should be factoring into their strategic and investment planning. Anonyome Labs’ offers both DI and PETs solutions through its B2B solutions. 

According to Gartner, “Increasing digitization of assets, information and experiences and the usage of AI are making privacy and transparency issues increasingly important by increasing opportunities for bad agents to mimic, disrupt and intercept our activities. They are also intensifying concerns around negative consequences of AI tools and techniques.  

“Rapid innovation in critical enabling technologies, like Web3, scalable vector databases and neuromorphic computing are creating new possibilities for IT solutions.” 

Through that lens, Gartner recommends, “Stimulat[ing] growth while mitigating risk and restrictive regulation by building user trust via systems such as decentralized identity and behavioral analytics and applying human-centered AI and responsible AI principles … [and] support[ing] your strategic product roadmap by identifying relevant emerging technologies and business values that they can enable and identifying relevant innovation tech partners.” 

Gartner describes DI or self-sovereign identity systems as technologies that address privacy and transparency challenges with traditional identity systems, and PETs as robust approaches that allow the processing of information while protecting underlying personal data. See below for further reading. 

Gartner says all 30 emerging technologies and trends are critical for product leaders to evaluate as part of their competitive strategy. And it seems many have already started in the privacy and transparency themed space: More than 62 per cent of US companies plan to incorporate a DI solution into their operations, with 74 per cent likely to do so by June 2024.  

Anonyome Labs is the leader in privacy and digital identity protection technologies. From verifiable credentials to VPNs and encrypted communications, we leverage our cryptography and blockchain technology expertise to take data privacy and security to the next level. Talk to us today to find out how your enterprise can get ahead of the curve on Gartner’s recommendations.  

Learn more about Anonyome Labs’ DI and PETs offerings 

You might like: 7 Benefits to Enterprises from Proactively Adopting Decentralized Identity 

Want more on decentralized identity from Anonyome Labs? 

Can Decentralized Identity Give You Greater Control of Your Online Identity?  Simple Definitions for Complex Terms in Decentralized Identity  17 Industries with Viable Use Cases for Decentralized Identity  Inside the Massive Projected Growth in the Decentralized Identity Market  Why More Companies are Turning to SaaS for Decentralized Identity Solutions  What our Chief Architect said about Decentralized Identity to Delay Happy Hour  6 Ways Web3 and Decentralized Identity Technologies Could Stop Deep Fakes  5 Aha! Moments About Decentralized Identity from the Privacy Files Podcast  Our whitepapers 

Want more on privacy-enhancing technologies from Anonyome Labs? 

Want to Monetize Privacy? Here’s How to Do It, Fast
2 Ways to Give Your Customers Privacy Products, Not Just Privacy Advice
This is How You Go Fast to Market with Privacy and Identity Protection Apps
5 Easy Ways to Become Your Customers’ Go-to for Privacy
2 Big Problems with Passwords – and How You Can Easily Solve Them for Your Customers
How to Use the Sudo Platform to Deliver Customer Privacy Solutions
How the Sudo Digital Identity Can Help Stop the Attack on Personal Privacy
5 Predictions for Data Privacy in 2023 and Beyond
3 Signs the US Public is Taking Data Privacy in its Own Hands

Check out our podcast, Privacy Files, to hear what your peers and experts are saying about the state of member and consumer privacy in real time. 

The post Gartner Confirms Anonyome Labs’ Solutions Offer Competitive Edge appeared first on Anonyome Labs.


1Kosmos BlockID

Vlog: Why 1Kosmos and Microsoft Entra Are Better Together


Join Robert MacDonald, VP of Product Marketing at 1Kosmos, and Vikram Subramanian, VP of Solutions, as they explore the integration of 1Kosmos with Microsoft Entra, enabling passwordless authentication, hybrid environments, and enhanced security for enterprises. Learn how 1Kosmos bridges gaps in the Microsoft ecosystem, providing a unified, secure experience for users across platforms.

Hi, welcome to our latest blog. My name is Rob MacDonald, I’m VP, Product Marketing, here at 1Kosmos, and I’m joined today by Vikram. Hey Vikram, how are you doing? Why don’t you introduce yourself?

Vikram Subramanian:

Hello everyone, I’m Vikram Subramanian, Vice President of Solutions at 1Kosmos. I just lead a bunch of mad scientists who put solutions together. So Rob, excited to be here.

Robert MacDonald:

Yeah, I’m glad that you could take time away from holding this place up to come and talk to us today. So Vik, today I want to talk to you a little bit about Microsoft. As we know, as an industry, Microsoft is in about 98% of all Fortune 2000 organizations; they’re everywhere. Traditionally they’ve been on-prem, and over the last number of years they’ve been moving into the cloud with Azure, with Azure AD now rebranded as Entra ID. In that transition, it was difficult to connect into the Microsoft stack. But they’ve recently made a couple of changes that now enable other vendors to become an authentication method in these platforms. So why don’t you tell us a little bit about what’s going on in the Microsoft world and what that means to us as an industry?

Vikram Subramanian:

Absolutely. So, Entra and Microsoft have really become partner-friendly now. I think a lot of our clients have been requesting this of Microsoft for quite some time. The primary use case has always been that a lot of our clients want their single sign-on solution to be Microsoft Entra. With the movement of Active Directory from on-premises to Entra, they want all of their users to come in and jump off into other applications through Entra.

However, we at 1Kosmos have always been advocating for passwordless authentication. Yes, passwordless is great, but you can’t move all users with a snap of a finger. So what do you do? You have to slowly start migrating them. This was not possible earlier. The only MFA factors that could be introduced to the end user came from a select set of vendors, Microsoft themselves being one of them, and it was a difficult proposition for our clients to combine the usage of Entra with 1Kosmos.

So now, with external authentication methods, what they have done is allow external vendors to come in and offer their MFA solutions as one of the options, and really provide Entra administrators an easy way to configure this into their platform. And it’s all standards-based, which means end users can do the first factor of authentication in Entra, do the second factor in 1Kosmos, and off they go to the applications they want to authenticate into.

Secondly, the other big thing that has come out, which is very interesting to me: if you have a Windows 11 machine and a certain Entra subscription, Microsoft has enabled the web sign-in method. With 1Kosmos supporting standard federation protocols like OIDC as well as SAML, you now have the capability of doing QR code based sign-in natively, without installing any agent on the endpoint. So it’s agentless, passwordless authentication that 1Kosmos can offer through our integration with Entra.

Robert MacDonald:

Well, that’s pretty exciting. So, that’s a substantial shift in what we’ve seen over the last number of years, specifically with Microsoft, and then even within our customers themselves. So Vik, what happens if an organization has Entra and an on-prem AD? Do those two things work together easily? Is there one authentication method from Microsoft that organizations can use to leverage that, or is there a different way that organizations have to go about doing it? And does this help them in that hybrid type environment, that maybe some organizations are in, as they move things to the cloud?

Vikram Subramanian:

Correct. Yeah, I think the biggest usage of 1Kosmos is going to be for organizations that are stuck in the middle now. I mean, they have regular on-prem AD joined machines, hybrid Azure AD (now hybrid Entra) joined machines, as well as pure Entra joined machines. The combination of these environments presents a number of challenges in offering a unified experience to end users. Don’t get me wrong, experiences can be offered individually, just by retaining all the methods that Microsoft offers. But if you want to provide a unified experience across your entire user population, that’s where 1Kosmos comes in. And we provide a variety of methods of integrating with Entra.

So the primary method is, we can become the IDP within the organization: retain your investment in Entra, and get all your applications embedded within and integrated with Entra. And now, with our latest update, we have support for the WS-Fed protocol, a legacy protocol that Entra requires, which means you can run conditional access within Entra but offload the authentication to 1Kosmos, where the user can do passwordless, password, and OTP-based authentication through the 14 different factors that we offer.

Then along with that, we also support on-prem Active Directory joined machines, where our agent can be installed and users can go passwordless. For hybrid Azure joined or pure Entra ID joined machines, you have the web sign-in method, and users can utilize the same QR code. So it’s a unified experience: log into the workstation, and the same experience can happen on the web also. What the end user gets is a single experience, a single place to go to do everything, and a single authenticator to use for everything, which means fewer help desk tickets, right?

Robert MacDonald:

Yeah, absolutely. So let’s talk a little bit about the “everything.” When you look at enterprises, we know that they’ve obviously got Windows machines, and I’m sure they’ve got Mac and Linux, plus a variety of other things, maybe VPNs, that don’t have a Microsoft product in front of them. With what Microsoft has done, does that now enable easier authentication into those as well? Or are Microsoft’s capabilities still fit only for Microsoft products, so that anything outside of that is still a bit of a problem, and you still need that standardization of experience and capabilities to fill the gaps Microsoft might leave?

Vikram Subramanian:

I’m very sure that folks are going to read the latest blog or the white paper that you’re going to put out on this, Rob. But the core of it is, really, where 1Kosmos fits in: for everything within the Microsoft ecosystem, utilize Microsoft. Integrate your applications with Microsoft; we’re not going to challenge that. The idea is that, to authenticate into the Microsoft ecosystem, you can utilize 1Kosmos. Why? Well, there are things that you cannot do with Entra, that you cannot do with any other single sign-on solution out there, which is to integrate with legacy applications such as RADIUS applications, or VPNs, or Linux systems, Mac systems. What are you going to do for all of those? And are you going to provide a completely separate user experience to all of those users?

I think that’s where enterprises need to weigh the pros and cons and really, I think everyone has erred on the side of, “Hey, I want a unified experience.” If you want that, definitely 1Kosmos is the answer, while you are able to leverage the investments that you’ve already made in Entra. So Intune, the conditional access policies, anything that you have integrated in terms of APIs. All of those things can happen, but you’re not restricted to utilizing only Microsoft authenticators.

Robert MacDonald:

Fair enough. Now, looking at the Microsoft environment, are users still required to start that engagement from the moment they open up their laptop at the very first point in time with a password? Is the starting spot still with Microsoft, using your password? If yes, what about resetting those passwords? Because obviously that’s going to be problematic down the road. Maybe what could something like that look like going forward, Vik?

Vikram Subramanian:

That’s a great segue into our proofing capability. I think everyone knows it begins with the password in the Microsoft ecosystem. Even in order to set a PIN or set up Windows Hello, you are going to have to enter the password the first time around. So the password is the entry point. What are you going to do to reset the password? What are you going to do to maintain it? The easiest way is with the 1Kosmos app, where you don’t answer any KBAs; instead, you authenticate with biometrics and we can reset it.

Secondly, the other big thing that has come up, of course, is the Scattered Spider attacks, which I think you spoke about in the last IBA, or a couple of IBAs ago: how do we prevent attacks such as the ones that have been launched by Scattered Spider? We are able to do identity proofing in a matter of 10 to 15 seconds for the end user, the same time that you would spend on a call, and then give the service desk the capability to really know who’s behind the phone call. And once that’s done, they’re able to reset the password.

Robert MacDonald:

And on Scattered Spider, for those watching this who have not seen those previous episodes, how do you go about verifying the identity to do that? And does that add value to Microsoft’s offering when you integrate us with Microsoft? What’s the value that that could bring to a Microsoft shop, essentially?

Vikram Subramanian:

The reason Scattered Spider has been in the news so much is that they were able to socially engineer the service desk just by answering a few questions that are available online, and then get access to some privileged accounts. And after that, of course, they wreaked havoc. So what we are really answering, from an authentication standpoint, is: who’s really behind the phone call, and are you who you say you are? One of the ways we can prove that in the physical world is, of course, by presenting a physical document, a government-issued ID.

But that was not possible in the digital world. Now, through our identity proofing capabilities, we can truly provide the help desk with information about who’s calling in and how they have verified themselves: say, for instance, by taking their driver’s license. We scan the front, we scan the back, then we take a selfie, match the selfie against the document, match the information on the back of the document against the front, and then we’re truly able to verify that the person is who they say they are. And if you need additional verification, we can go back to the issuing agencies and get that information.

Robert MacDonald:

Wow, okay, so that’s pretty powerful and certainly an elegant way to help support the investment that organizations have already made into the Microsoft environment. Vik, last question, I know that things like remote desktops and virtual machines, and things along those lines, have always been relatively tricky to secure. Can you talk a little bit about maybe the way in which any of these changes from Microsoft may support that from a Microsoft perspective? And then maybe even talk about ways in which we could help organizations solve that as well, assuming we can?

Vikram Subramanian:

So, these changes themselves could potentially allow an end user to log into their workstation. However, remote desktops, or anything that is not your own workstation, those use cases are not necessarily going to benefit from the changes that Microsoft has made. 1Kosmos does have a solution for that, however: we offer MFA and passwordless login into remote desktops, and really, if organizations want to holistically protect all of the use cases they have with Microsoft, we have MFA for that. But I would say the changes that I have seen are really geared towards the end user, not the power user: the end user who’s trying to log into their workstation, get on with their day, and log into machines and some websites and web applications. For them, these changes are really amazing.

And with integration with 1Kosmos where we are bringing in the MFA, where people could literally just log in with their face, or log in with their fingerprint, those are all things that we can definitely bring into the picture, and now offer it up to even Entra ID customers, who are pure Entra ID joined.

Robert MacDonald:

Oh, that’s very exciting. Listen, Vik, I appreciate you coming by today and talking about how organizations are modernizing their IAM infrastructure with Microsoft Entra. Obviously it’s a big step forward for a lot of these organizations to help secure logins and prevent fraud. But as we know, there are gaps and areas for improvement, and those are things that 1Kosmos can certainly help with. So I appreciate you taking the time today, coming in and filling us in on some of those changes and how we can help organizations going forward. Appreciate it, Vik.

Vikram Subramanian:

Thank you for having me over, Rob.

Robert MacDonald:

We’ll see you again.

Vikram Subramanian:

Thanks guys.

The post Vlog: Why 1Kosmos and Microsoft Entra Are Better Together appeared first on 1Kosmos.


Civic

Tokenized Identity: Quadratic Voting As A Public Good with Dean Pappas, Dean’s List


In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Dean Pappas of Dean’s List. They explore everything from the state of DAOs, quadratic voting, $POLL and beyond. They discuss cultivating better communities in Web3. If you’re not familiar with Dean, he’s been instrumental in the development of DAOs on Solana […]

The post Tokenized Identity: Quadratic Voting As A Public Good with Dean Pappas, Dean’s List appeared first on Civic Technologies, Inc..


UNISOT

ENSURING NON-TOXIC TAMPONS AND SAFE SUPPLY CHAINS


Recent studies have uncovered alarming levels of toxic metals such as arsenic and lead in tampons. Notably, lead concentrations were higher in non-organic tampons, while arsenic levels were higher in organic tampons. This raises significant health concerns given the high absorption potential of the vaginal mucosa​ (Berkeley Public Health)​​ (University of California)​.

Jenni A. Shearston, a postdoctoral scholar at the UC Berkeley School of Public Health, highlighted the importance of testing: “I really hope that manufacturers are required to test their products for metals, especially for toxic metals. It would be exciting to see the public call for this, or to ask for better labeling on tampons and other menstrual products”​ (Berkeley Public Health)​​ (University of California)​.

How UNISOT’s Technology Ensures Safety

UNISOT’s Asset Traceability Platform and Digital Product Passports provide essential tools for ensuring the safety of menstrual products. By leveraging blockchain technology, these tools offer an immutable record of the entire supply chain, from raw material sourcing to manufacturing processes. This transparency allows consumers to verify the safety and quality of the products they use, ensuring they are free from harmful toxins.

Addressing Arsenic in Organic Tampons

A key finding from the study was that the arsenic found in organic tampons, which likely comes from natural fertilizers used in cotton farming, can increase the risk of cancer, reproductive and developmental harm, cardiovascular disease, neurological effects, endocrine disruption, and kidney and liver damage. This highlights a significant due-diligence gap within the supply chain. UNISOT’s Supply Chain Due Diligence (SDD) tool is designed to address this challenge by automating and securing the process of mapping a company’s supplier network across multiple tiers upstream.

SDD protects supplier trading secrets by anonymizing company names within the supply chain network. This ensures that sensitive commercial information remains confidential while still allowing for a comprehensive mapping and analysis of the supply chain. This level of transparency and protection is crucial for identifying and mitigating sources of contamination, such as arsenic from natural fertilizers, and ensuring that only safe products reach consumers.
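UNISOT does not publish SDD’s internals, but the core idea of mapping a multi-tier supplier network while anonymizing company names can be sketched with salted pseudonyms. Everything below (the function names, the example chain, the salt) is an illustrative assumption, not UNISOT’s actual implementation:

```python
import hashlib

def pseudonymize(name, salt):
    """Replace a supplier name with a stable, salted pseudonym."""
    digest = hashlib.sha256((salt + ":" + name).encode()).hexdigest()
    return "supplier-" + digest[:12]

def map_chain(tiers, salt):
    """Pseudonymize every node in a multi-tier supplier graph while
    preserving its upstream structure for due-diligence analysis."""
    return {
        pseudonymize(buyer, salt): [pseudonymize(s, salt) for s in suppliers]
        for buyer, suppliers in tiers.items()
    }

# Hypothetical two-tier chain: brand -> mill -> farms
chain = {"Tampon Brand": ["Cotton Mill A"], "Cotton Mill A": ["Farm X", "Farm Y"]}
anon = map_chain(chain, salt="audit-2024")
```

Because the pseudonyms are deterministic for a given salt, an auditor can still trace contamination paths across tiers without ever seeing the commercial names behind them.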

Addressing Lead in Non-Organic Tampons

The discovery of higher lead concentrations in non-organic tampons is particularly concerning due to the severe health risks associated with lead exposure, including neurological damage, reproductive issues and increased cancer risk. Lead can enter the cotton used in tampons through various environmental pathways, such as contaminated soil, water or air, often near industrial sites or due to the use of certain pesticides.

UNISOT’s Asset Traceability Platform can help address this issue by providing detailed insights into every step of the supply chain. By tracking the origins of raw materials and monitoring manufacturing processes, the platform can identify and mitigate sources of lead contamination. This includes ensuring that cotton fields are in safe, uncontaminated environments and that any chemicals used in the cultivation or processing of cotton are free from harmful substances.

Moreover, by implementing UNISOT’s Supply Chain Due Diligence, companies can continuously monitor and verify their suppliers’ compliance with safety standards. This proactive approach not only helps in identifying potential contamination sources but also ensures that suppliers adhere to stringent safety protocols, thereby reducing the risk of toxic contamination in tampons, guaranteeing that all menstrual products are safe for women to use.

Women deserve to have access to products that are completely free from harmful toxins, ensuring their health and well-being.

Sources:
https://news.sky.com/story/arsenic-lead-and-other-toxic-metals-found-in-tampons-study-says-13175436
https://www.cbsnews.com/news/toxic-metals-tampons-arsenic-lead/
https://www.npr.org/2024/07/11/nx-s1-5036484/tampons-heavy-metals-study

The post ENSURING NON-TOXIC TAMPONS AND SAFE SUPPLY CHAINS appeared first on UNISOT.


Elliptic

Crypto regulatory affairs: Hong Kong publishes stablecoin consultation response and announces sandbox participants


Financial sector watchdogs in Hong Kong have taken an important step to pave the way for stablecoin regulation, and to bolster Hong Kong’s reputation as a leading hub for crypto development in the Asia-Pacific (APAC) region.


KuppingerCole

Sep 19, 2024: IGA als Herzstück eines jeden Security-Transformations-Programms

In today’s digital landscape, companies face growing cybersecurity challenges. Attacks on digital identities are on the rise and are often successful, as recent incidents show. At the same time, digital identity is a key component of Zero Trust architectures, which enable controlled access to corporate data.

Tuesday, 23. July 2024

Lockstep

In praise of metadata


The term metadata has become rather loaded. Perhaps even poisoned, for its association with telecommunications surveillance. But I want to sing its praises, for it is metadata that tells us if any given information is accurate or reliable, or trustworthy, fit for purpose, valuable.

National security hawks advocating stronger surveillance powers have tried to whitewash metadata collection. They liken telecom metadata to the visible details on an ordinary envelope and insist it’s innocuous compared with the contents of the message.

On the other hand, U.S. General Michael Hayden, former head of the National Security Agency, once stated plainly and simply “we kill people based on metadata” (although he denied that any telecommunications metadata collected on regular citizens was used for that).

Data and metadata: same but different

From a privacy perspective, metadata should certainly not be distinguished from data in general.

As I understand prevailing principles-based privacy law, if a piece of metadata is personally identifiable, then it constitutes personal data and falls within the scope of such law.

So from one perspective, metadata is merely more data. Nevertheless, I find it useful to distinguish data and metadata, because the properties of data that make it valuable or reliable (or unreliable) are often codified in metadata.

For example:

- the age of a cell phone number or email address can suggest it may be a burner account being used in a fraud
- data presented online for identification purposes or in card-not-present payments really needs to be “original” in the sense that it’s presented by the rightful subject
- watermarks generated within digital camera hardware can prove that an image is genuine, rather than AI-generated
- clinical trial results should be based on patient data collected under proper consent conditions.
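The first example, account age as a fraud signal, is simple enough to sketch in code. The threshold and function name below are illustrative assumptions, not any particular vendor’s rule:

```python
from datetime import date

def looks_like_burner(created, as_of, min_age_days=90):
    """Flag an identifier (phone number, email address) whose
    creation-date metadata makes it look like a possible burner."""
    return (as_of - created).days < min_age_days

# A two-week-old email address gets flagged; a years-old one does not.
assert looks_like_burner(date(2024, 8, 20), date(2024, 9, 3))
assert not looks_like_burner(date(2019, 5, 1), date(2024, 9, 3))
```

Note that the decision turns entirely on metadata (when the address was created), not on the address itself.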

We might argue that the value of any data lies in the metadata.

Verifiable credentials are really about metadata

This distinction between data and metadata also illuminates how verifiable credentials convey rich data quality signals.

Verifiable credentials are tools that convey machine readable assertions made by a third party about a subject — which is usually a human but verifiable credentials for non-human subjects such as IoT devices are expanding fast.

The important elements of verifiable credentials (and verifiable presentations) are:

- they name (or point to) the subject of the credential
- they name the issuer of the credential
- they bear the digital signature of the issuer, which gives the credential provenance
- the presentation bears the signature of the subject (ideally generated within a secure wallet), which indicates their consent or control
- the credential carries a range of administrative metadata, such as validity date, applicable terms & conditions, and details of the device carrying the credential.
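As a rough sketch, a credential with these elements might look as follows. The field names loosely follow the W3C Verifiable Credentials data model, and all identifiers and values are made up for illustration:

```python
credential = {
    "issuer": "did:example:dmv",                  # who is vouching (key metadata)
    "credentialSubject": {                        # who the claims are about
        "id": "did:example:alice",
        "licenceClass": "C",
    },
    "validFrom": "2024-01-01T00:00:00Z",          # administrative metadata
    "validUntil": "2029-01-01T00:00:00Z",
    "proof": {                                    # the issuer's digital signature
        "type": "DataIntegrityProof",
        "proofValue": "...",                      # placeholder, not a real signature
    },
}
```

Notice how little of this structure is the claim itself; most of it is metadata about who asserted the claim, when it holds, and how to check it.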

The digital signature on a verifiable presentation is typically created automatically in a wallet or chip. The signing process uses a private key embedded in the firmware, unique to the subject, but not visible to them.

The issuer of a credential is one of the most important factors used to determine whether to accept that credential or not. That is, the issuer confers value; some issuers are valued more than others.

So the name of the issuer of the credential is metadata of the credential: it is something that a first party wants to know about a second party before deciding to do a transaction.

About the credit cards in the banner graphic

The banner at the top of this blog shows some favourite images from my archive, from the very first charge card in 1955 through a series of technological innovations. First, the printed cardholder details were coded on a magnetic stripe for automated reading; tougher plastic cards supported antifraud measures like holograms and guilloche printing; the magnetic stripe was superseded by smart chips that prevent copying; and smartcards gave way to smart phones with biometrics to further protect the cardholder.

This evolution is really all about metadata!

Cards and phones, as far as the card payment system is concerned, are just data carriers. They store details about the card holder (in particular the Primary Account Number or PAN) and facilitate the presentation of that data to a merchant. The move from mag stripe to chip was the most important security measure in sixty-odd years; the chip provides signals about the originality of the PAN and the consent of the cardholder to each presentation.

Every major upgrade of credit card technology has improved the metadata that protects the primary data. All along, over several decades, the primary data has remained the same. But it has got better, thanks to metadata.

I analysed the evolution of cards in more detail here: A CMM for personal data carriers and digital wallets.

“Attributes” and “Claims”

The late great Kim Cameron — author of the prized Laws of Identity — carefully used the term “claim” in defining digital identity. The words claim, attribute and assertion might seem interchangeable but Kim singled out claim as “an assertion of the truth of something, typically one which is disputed or in doubt”.

He stressed that there is always uncertainty in the real world, and when authenticating another party, there is always going to be doubt over important attributes.

The trick is to reduce that doubt to acceptable levels. A relying party will always reserve the right to decide for itself if its doubts have been resolved.

Verifiable credentials technology provides multiple mechanisms for doing just that.

For one thing, a verifiable credential bears the name of the credential issuer. In many cases, there is a natural issuer of a credential of interest: driver licences are issued by government departments of motor vehicles, employee numbers are issued by employers, credit card numbers are issued by banks. When verifying a credential, one of the most important things to check is the issuer.
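In code, checking the issuer is often the first gate a relying party applies. A minimal sketch, with hypothetical issuer identifiers:

```python
# Hypothetical allowlist of issuers this relying party values.
TRUSTED_ISSUERS = {"did:example:dmv", "did:example:bank"}

def issuer_acceptable(credential):
    """A relying party's first question: do we value this issuer?"""
    return credential.get("issuer") in TRUSTED_ISSUERS

assert issuer_acceptable({"issuer": "did:example:dmv"})
assert not issuer_acceptable({"issuer": "did:example:unknown"})
```

A real verifier would go on to check the signature, validity dates, and revocation status, but each relying party reserves the right to set its own bar.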

The familiarity of metadata in real life

This coupling of data and metadata is routine in the analogue world.

In courtroom dramas, stories turn on facts and evidence.  The facts tendered in a court case are only as good as the evidence. There are rules of evidence governing how information is obtained and safeguarded.

Facts and evidence in court procedures correspond to claims and proofs in digital identity. It’s all data and metadata.

How do you know?

In science, it’s not just what you know that matters but how do you know. What is the source of a statement or claim? Where is the evidence?

Children know this instinctively. As they develop a sense of how knowledge and trust are fluid, plucky kids will challenge the things they are told, with the riposte “How do YOU know?”.

Metadata and the stories behind the data

Metadata can tell the story behind the data, a story that is increasingly important in all things digital.

As data supply chains become ever more complicated, we need enhanced abilities to interrogate the information we receive and depend on — whether that’s a news report, a photographic image, a student’s essay, a CV, the results of an automobile’s emissions test, or a scientific report on climate change.

Where did a given piece of data come from? Who and/or what contributed to it?

The products of generative AI are starting to be watermarked, but that’s only a start. It will be important to know more, like which algorithms and version numbers were used, where did the models run, how were they trained, and was the training data audited?

Looking at signatures as metadata

With this orientation, everywhere I look now, I see metadata!

A less obvious example of metadata is the digital signature.

A digital signature is a data value (technically a “cryptogram”) usually calculated by hashing and/or encrypting a record using a private key controlled by some actor. Note that I’m referring here to asymmetric or public key digital signatures.

The signature on a record can be checked at any future time to verify that a particular actor had something to do with that record, such as creating it or agreeing to it.

There are many different applications for digital signatures — but they are all used to create evidence that a given record at a certain time was touched in some way by a certain actor. That is, the signature tells a story about the history of a record. The digital signature is more metadata.
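A toy sketch of the sign-then-verify flow is below. Real digital signatures of the kind described here are asymmetric (a private key signs, a public key verifies); Python’s standard library only ships symmetric primitives, so HMAC stands in for the key pair purely to show the shape of the protocol:

```python
import hashlib
import hmac

SIGNING_KEY = b"actor-private-key"  # stand-in for an asymmetric private key

def sign(record):
    """Produce a cryptogram binding the record to the key holder."""
    digest = hashlib.sha256(record).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(record, signature):
    """Check, at any later time, that the record is unchanged
    and was signed by the key holder."""
    return hmac.compare_digest(sign(record), signature)

sig = sign(b"contract v1: both parties agree")
assert verify(b"contract v1: both parties agree", sig)
assert not verify(b"contract v2: terms quietly changed", sig)  # tampering breaks it
```

The signature itself carries no content; it is pure metadata, evidence about who touched the record and whether it has changed since.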

The post In praise of metadata appeared first on Lockstep.


Finicity

CFPB 1033 and Open Banking: Opportunities and Challenges for Banks


In this webinar from April 24, 2024 Tom Carpenter, Senior Vice President of Industry, Policy and Standards Engagement from Mastercard, along with panelists from Sidley and i2c, discussed the potential of CFPB Section 1033 for open banking initiatives with banks. 

They discussed how banks can leverage data sharing to enhance customer experiences, create new product offerings and navigate competition from fintechs. You will also learn about the opportunities represented by the rule and how to develop strategies to capitalize on the evolving landscape. 

Find out why the CFPB Section 1033 rule is crucial in advancing open banking, any potential risks associated with the rulemaking, best practices for compliance with Section 1033 and new opportunities to leverage data sharing to innovate and offer new services. 

You can watch the webinar here

The post CFPB 1033 and Open Banking: Opportunities and Challenges for Banks appeared first on Finicity.


Why keep open banking top of mind? CFPB regulation and new opportunities


In this webinar with the Consumer Bankers Association from April 25, 2024, Mastercard’s Ben Soccorsy and Jenny Ziegler tackled the impact open banking regulation will have on banks and the opportunity having a regulated ecosystem provides. 

From the basics of open banking data and how consumers can access their data from any of their financial institutions to the way that data can be used for lending, financial management, wealth management, and payments. 

You can learn how the regulatory environment is accelerating the shift towards open banking. CFPB Dodd Frank Section 1033 is intended to break down barriers to accessing financial products, jump-start competition between financial institutions and fintechs and provide consumers more control and access to their financial data. 

Regulation will mandate that data providers must share their financial data with third parties or consumers via APIs safely and securely. The compliance deadline varies depending on the size of the financial institution. 

For banks, this means being on top of API enablement, consent management, information security, third party risk management, risk and compliance, data governance and data monetization strategies. 

You can watch the webinar here

The post Why keep open banking top of mind? CFPB regulation and new opportunities appeared first on Finicity.


KuppingerCole

CrowdStrike’s Cyber Blackout


by Mike Small

Some years ago, the book Blackout by Marc Elsberg described the impact on society of malicious software infecting the electricity supply network in Europe. Last week the whole world experienced the consequences of a faulty software update to the security software CrowdStrike Falcon. According to Mr. Kurtz, the CEO of CrowdStrike, this was not a malicious act. However, the impact of this error was considerable.

We Are All Now Dependent on Digital Systems

I can understand the feelings of the CrowdStrike team. When I was VP of development of security software, I remember very clearly how I felt when one night I was called and told that our security software was preventing clinicians in a paediatric hospital from accessing the systems and that unless they regained access within the hour, babies would start to die.

Developing security software places an extra burden on development teams. This software is intended to protect organisations from malicious actors; however, by its very nature, that same security software can also prevent the systems it is intended to protect from operating.

Over the past several years governments have recognised the increasing dependence of society on IT systems and how this brings the need for greater resilience of these systems. In Europe, this has resulted in legislation that includes NIS2 and DORA. The best control to ensure resilience is diversity. However, in the world of IT, most organisations are heavily dependent on systems that are delivered by a few suppliers. This is especially true for desktop systems, where Microsoft is the dominant supplier, and for cloud services, where AWS, Google, and Microsoft have the lion’s share of the market.

In this instance, the problem was that an update to security software caused the systems that were running it to crash, and the fix required a significant amount of manual intervention on each affected machine. The end users had no control over the deployment of this patch, as the security vendor pushed it out to all endpoints across the world within a very short period of time. The issue affected all CrowdStrike customers running Windows-based systems including PCs, servers, kiosks and other forms of specialist terminals.

This is actually not the first time it has happened – in fact, CrowdStrike had a similar issue with the Linux version of its software just a few months ago. An update incompatible with the latest version of Debian Linux was released, causing servers to crash and refuse to boot. Back then, it took the company weeks to acknowledge the issue and reveal that Debian Linux wasn’t covered by their test procedures, despite being officially supported.

Other cybersecurity vendors, including McAfee, Sophos, and Symantec, had similar issues over the last two decades, although they have never had such a global impact.

What an End User Organisation Must Do

Since this occurred through a defect in security software, the normal advice relating to the use of up-to-date security software is not very helpful. Additionally, these are infrequent but high-impact events which make planning hard. Here are some actions that organisations can take:

Include this in your Business Continuity Plan – Consider this kind of risk as part of your business continuity planning. Remember that as your organisation goes digital, it becomes more dependent upon IT, and cyber risks require special treatment. Cyber incidents spread very rapidly across interconnected components, so moving to another physical location does not help.

Resilience through Diversity – The most powerful control to ensure resilience is diversity, but this is difficult to achieve given the dominance of a small number of major suppliers. Consider whether the trade-off between cost and ease of management against the risk of cyber failure due to dependence upon a single IT environment is acceptable for your critical business systems. For life-critical systems, best practice requires three different software elements provided by three different suppliers to minimize risk. This is impractical for most situations, but you could consider deploying security software from multiple vendors across different parts of your IT estate.

Evaluate Vendor Risk – When choosing security software, include consideration of this kind of risk in your vendor assessment process. Evaluate the kinds of controls that the vendor has to prevent and mitigate this kind of error. These can include the software design and development processes, including testing and deployment. Does the vendor phase the deployment of updates with inbuilt feedback? Does the vendor allow you any control over the deployment of updates, and can updates be selectively deployed to groups of systems? Does it follow the standard-based practices of software supply chain security?

Incident Plan – Have a well-tested incident response plan and include this kind of event in your planning. Include and test how you would manage having to reimage or reboot a large portion of your IT estate. Do you have the tools and skills to manage this? Don’t forget that you need to verify whether you have backed up your data and are able to restore it in time.

Keep Calm and Carry On – Unfortunately, a lot of cybercriminals and even a handful of security vendors have already recognized this massive incident as an opportunity to exploit victims’ insecurity and vulnerable state. We can already observe a massive increase in phishing and other criminal activities focusing on CrowdStrike’s and Microsoft’s products. Some vendors are trying to push their own products as “more resilient” alternatives. However, the best thing you can do now is to avoid making rash decisions. Focus on addressing the immediate consequences of the outage and start looking for neutral expert guidance for adjusting your long-term security strategies, architectures, and portfolios. Focus on methods that can be proven and validated and avoid snake oil at all costs.


Shyft Network

Zero-Knowledge: The Future of More Secure and Scalable Blockchain

Zero-knowledge technology (ZK) enhances blockchain privacy and scalability. Recent innovations include ZK verifiers for Bitcoin and improved ZK protocols. Future ZK developments will focus on speed, usability, and efficiency.

Blockchain technology has transformed the world with its inherent transparency, immutability, and security. However, amidst the rising dominance of a few tech companies, concerns about increasing centralization and censorship underscore the need for enhanced privacy.

This is where zero-knowledge technology comes into play. It not only offers improved privacy but also provides a better solution to blockchain’s scalability issues.

Zero-knowledge is a broader category of cryptographic methods designed to preserve privacy by allowing one party to prove to another cryptographically that they possess knowledge about a piece of information without revealing the actual underlying details.

In blockchain, zero-knowledge proofs (ZKPs) use algorithms to process data and confirm its truthfulness.

Zero-knowledge proofs focus on three main criteria:

Zero-knowledge: The verifier cannot access the original input but can only confirm the statement’s validity.
Soundness: The protocol cannot validate invalid input as true.
Completeness: The protocol always validates the statement, provided the input is valid.

So, a basic zero-knowledge proof consists of three components. A witness who provides the secret information; a challenge, in which the verifier selects a question for the prover to answer; and a response, in which the prover answers the question.
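A minimal sketch of this witness/challenge/response flow is the Schnorr identification protocol. The parameters below are tiny and purely illustrative; real deployments use 256-bit elliptic-curve groups.

```python
import secrets

# Toy Schnorr identification protocol -- illustrative parameters only.
p = 1019   # prime modulus
q = 509    # prime order of the subgroup generated by g (q divides p-1)
g = 4      # generator of that subgroup

x = secrets.randbelow(q - 1) + 1   # witness: the prover's secret
y = pow(g, x, p)                   # public value published by the prover

# 1. Commitment: the prover picks a random nonce r and sends t.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: the verifier picks a random challenge c.
c = secrets.randbelow(q)

# 3. Response: the prover answers without revealing x.
s = (r + c * x) % q

# The verifier checks g^s == t * y^c (mod p) and is convinced
# the prover knows x, yet learns nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(r + c*x) = g^r * (g^x)^c = t * y^c, so a correct response is only possible if the prover actually knows the witness x.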

There are several types of Zero-Knowledge Proofs (ZKPs):

Interactive ZKPs: These require several exchanges between the prover and the verifier.
Non-interactive ZKPs: Once set up, these do not require any further interaction between the prover and the verifier.
SNARKs and zk-SNARKs: SNARKs provide brief proofs that can be quickly verified, while zk-SNARKs add the zero-knowledge property on top.

ZKPs Taking Center Stage

While the basic concept of zero-knowledge proofs has existed since the 1980s, its development accelerated substantially only after the introduction of Bitcoin and Ethereum, due to the technology’s ability to scale blockchains.

By enabling one person to demonstrate to another that a computation was performed correctly without redoing the work or sharing the data used, ZKPs streamline and speed up the verification process.

This efficiency reduces costs and accelerates transactions on blockchains like Bitcoin, as it eliminates the need for every node to re-execute each transaction.

Instead, a single node handles the processing and then uses a ZKP to prove its accuracy, while the other nodes only need to verify this proof. Thus, ZKPs facilitate the development of a financial system that, unlike traditional finance, does not depend on social trust.

With the help of zero-knowledge technology, crypto users can also maintain their anonymity on public blockchains, where all the transaction history is for everyone to see, track, and monitor.

The tech actually allows for private identity verification, allowing for compliance while eliminating the need to reveal the data itself. This way, it even takes the load off the blockchain, offering scalability benefits.

In the crypto world, teams like Polyhedra and Lambda Class are actively exploring this topic. The venture studio and investment firm Lambda Class sees SNARKs as having a significant impact on shaping our world. Earlier this year, they proposed a simple and modular bridge that uses multi-storage proofs.

Ethereum co-founder Vitalik Buterin, too, has expressed his support for implementing zero-knowledge (ZK) technology to achieve user privacy, censorship resistance, and autonomy. He believes that in the future all rollups, a solution for improving blockchain scalability, will be ZK rollups.


“zk-SNARKs will be as important as blockchains in the next ten years,” Buterin said last year. He has long been a proponent of this cryptographic tech to help overcome the problem of scalability and privacy.

Beyond crypto, ZKPs will also become essential for verifying whether a given piece of content was produced by an AI model. Moreover, in the coming decades, ZKPs are likely to play a key role in enhancing efficiency, securing devices, and ensuring national security.

The Latest Developments

Over the past few years, zero-knowledge proofs have been widely used to scale Ethereum. Now, they are also recognized as a crucial element for unlocking Bitcoin’s programmability. Weikeng Chen, a PhD graduate from UC Berkeley and sponsored by infrastructure company Starkware, achieved the first implementation of a zero-knowledge verifier using Bitcoin script.

The historic milestone came after three months of exploring the possibilities of the technical proposal “OP_CAT” to expand Bitcoin’s capabilities by introducing smart contract functionality to the network. StarkWare’s ZK verifier marks the first large-scale practical application of the proposal’s opcode on the testnet, Bitcoin Signet.

“This was a tremendous effort and took a significant amount of time,” said Chen in an interview. “We started with nothing… We had to build the full stack, which eventually led to the implementation of the STARK verifier.”

While challenges remain, contributors at Starkware believe the project’s success represents “a monumental leap forward” towards Bitcoin scaling solutions that can use its ZK roll-up technology.

Other developments in the space include Polygon Labs’ latest version of its ZKP, Plonky3, which aims to enhance efficiency and security in distributed networks. While its previous versions had limitations in flexibility and adaptability, Plonky3 will empower developers to leverage ZK technology to build their own zkVM or zkEVM virtual machines.

With Plonky3, which has undergone auditing, the goal is to encourage innovation and community collaboration. Its versatility stems from its capability to adapt to various finite fields such as Goldilocks, BabyBear, and Mersenne31, as well as hash functions such as Keccak-256, BLAKE3, and Poseidon.
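To give a sense of what adapting to a finite field like Goldilocks means, here is a minimal sketch of arithmetic modulo the Goldilocks prime (p = 2^64 − 2^32 + 1). This is illustrative only, not Plonky3’s optimized implementation.

```python
# Arithmetic in the Goldilocks prime field, p = 2^64 - 2^32 + 1.
# A minimal sketch; production provers use heavily optimized versions.
P = 2**64 - 2**32 + 1

def add(a, b):
    return (a + b) % P

def mul(a, b):
    return (a * b) % P

def inv(a):
    # Fermat's little theorem: a^(P-2) is the inverse of a (mod P).
    return pow(a, P - 2, P)

a, b = 123456789, 987654321
assert mul(a, inv(a)) == 1                              # every nonzero element is invertible
assert mul(add(a, b), 2) == add(mul(a, 2), mul(b, 2))   # distributivity holds
```

The appeal of this particular prime is that field elements fit in a single 64-bit word and reduction is cheap, which is why several proof systems build on it.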

This development comes a month after Polygon Labs acquired the tech firm Toposware, its third acquisition of a ZK-focused team. So far, the organization has invested $1 billion in zero-knowledge technology.

“ZK is easier, not only from a development perspective but for users and user experience too,” a Polygon spokesperson told local media.

These efforts by Polygon Labs aim to enhance interoperability. Yet, ZK-based technology struggles with compatibility issues within EVM networks and requires a Type 1 Prover to confirm the validity of a transaction to a blockchain. In response, Polygon Labs and Toposware are jointly developing such a Prover.

In April this year, VC giant Andreessen Horowitz (a16z) unveiled the release of its zero-knowledge virtual machine (zkVM) to help its portfolio companies scale their operations. a16z is also an investor in Matter Labs, a leading zkEVM maker.

ZK Proofs, according to the firm’s researcher and associate professor at Georgetown University, Justin Thaler, scale blockchains by doing the hard work off-chain. Not all nodes have to do all the work, but they get the guarantee that the work was done correctly.

The Future of Zero-Knowledge Tech

With scalability continuing to be a big challenge for blockchains, zero-knowledge technology has vast potential in the crypto space. For now, the implementation of ZK tech is in its early stages. However, the growing demand for privacy on public blockchains is expected to lead to growth and advancement in this technology.

The future of this tech focuses on prioritizing speed, improving developer tooling, reducing hardware requirements, enhancing flexibility, and widening support. With these advancements, we’ll truly see ZK’s transformative potential, leading to a more scalable and secure blockchain world.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance into blockchain while preserving privacy and sovereignty. SHFT is its native token and the fuel of the network.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (Formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.

Zero-Knowledge: The Future of More Secure and Scalable Blockchain was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockto

Hoe brondata het toekennen van duurzaamheidsleningen versnelt bij Warmtefonds

Zeven miljard euro aan leningen, zoveel wil het Warmtefonds voor 2030 verstrekken aan huishoudens en onderwijsinstellingen om te verduurzamen.

Zeven miljard euro aan leningen, zoveel wil het Warmtefonds voor 2030 verstrekken aan huishoudens en onderwijsinstellingen om te verduurzamen.


KuppingerCole

Omada Identity Cloud


by Nitish Deshpande

This KuppingerCole Executive View report looks at Omada’s SaaS solution, Omada Identity Cloud. This report will highlight new features in the latest release and provide an overview of the IGA capabilities of the solution. The report concludes by outlining the strengths and challenges of the solution.

Thales Group

Accelerating eSIM adoption – sharing best practice for mobile operators

To discover the latest eSIM best practices and receive dedicated support for your deployment projects, schedule a meeting with a Thales expert today. 

Click the button below to get started.

Contact your local sales representative

#1 - What is the current state of the eSIM market? Where is it heading next?

The eSIM (embedded SIM) has already made a significant impact in both the consumer and IoT (Internet of Things) markets. However, over the next 18 months to two years, we’ll see a real step change in momentum. This will encompass not just the sheer volume of eSIM-enabled products being launched, but the variety of devices involved. Numerous different smartphones, tablets, laptops, wearables and other consumer products featuring an eSIM will be introduced, along with a diverse array of IoT devices. 

Last year, Thales commissioned Mobile World Live to survey device makers to find out more about their plans. The results show that 74% of smartphone OEMs (Original Equipment Manufacturers) and ODMs (Original Design Manufacturers) that do not currently offer eSIM devices plan to do so in the next two years. What’s more, nearly 64% of smartphone OEMs and ODMs expect between 25% and 75% of their product portfolio to include an eSIM over the same period.

Looking beyond the smartphone, the Mobile World Live survey found that 70% of IoT OEMs that do not already sell eSIM devices plan to do so in the next two years. In addition, 39% of laptop and tablet OEMs expect over 75% of their products to be eSIM-enabled in the next three years.
 

#2 - What can we expect in terms of eSIM-only devices?

Apple’s decision to launch its US iPhone 14 and 15 models without a removable SIM slot was a turning point for the eSIM ecosystem. In the US at least, mobile operators that want to support the latest models from the world’s second largest smartphone manufacturer need to embrace the eSIM.

Even more significantly, Apple was clearly signposting its direction of travel. Reflecting this, the company recently announced that eSIM-only iPads are being launched everywhere in the world, with the exception of China. 

Other smartphone OEMs were bound to follow in Apple’s footsteps, and the Mobile World Live survey points to a significant uptick in the near future: 53% of smartphone OEMs and ODMs plan to develop an eSIM-only product within two years. 

 

#3 - How are MNOs responding?

Nearly 400 MNOs (Mobile Network Operators) worldwide supported eSIM connectivity as of June 2023, according to the GSMA. Given the stellar growth predicted for eSIM-enabled devices, we’ll undoubtedly see many more operators added to that list. Indeed, Thales is working with numerous MNOs to enable them to provide outstanding eSIM connectivity services.

But while many progressive MNOs are now working hard to promote the benefits of the eSIM – and take advantage of the commercial opportunities on offer – we believe there is huge potential for more operators to do the same. These MNOs should look beyond simply providing the technical means to support the eSIM, and develop marketing strategies dedicated to this transformative technology.

 

#4 - What strategies are MNOs adopting?  How are operators promoting the benefits of an eSIM-based customer journey? Are eSIM-first strategies being pursued?

Working with MNOs around the world, we’re seeing an ever-growing number of innovative and effective strategies being rolled out to promote eSIM connectivity to end users. Operators are using a range of incentives to encourage a switch from removable SIM to eSIM in devices that support both solutions, including offering it free of charge. Some operators are complementing these promotions with educational and informative campaigns that spell out the benefits of the eSIM via social media and websites. Operators are also beginning to take advantage of the unique characteristics of the eSIM in their promotions. For example, mobile operators can leverage the ability to activate multiple eSIM subscriptions simultaneously to launch ‘Family and Friends’ type bundles.

There is massive scope for more operators to adopt these and other best practices and build a distinct brand offer around the eSIM. However, according to the Mobile World Live survey, only 20% of smartphone OEMs and ODMs have full visibility on how their devices are used in the market; greater insight would probably boost the pace of deployment and enable OEMs to fine-tune their products for the eSIM ecosystem. 

While 40% of consumers are aware of the eSIM, only 11% heard about it via a mobile operator, according to research by the GSMA.  These findings underscore the untapped potential of the eSIM. By boosting awareness of the eSIM among consumers, MNOs will accelerate adoption.    

 

#5 - What are the successful eSIM-based use cases and promotions?

Historically, mobile operators have expected customers to buy a subscription before they’ve experienced their mobile network service. But the eSIM can turn this on its head, by offering far greater flexibility and convenience via a fully digital experience. On-boarding a subscriber is easy and there’s no need to change provider. 

Several mobile operators are taking advantage of the eSIM’s greater flexibility by running novel ‘Try before you buy’ promotions.  Customers can ‘test drive’ a mobile network via their device’s eSIM, without having to cancel their existing subscription. As a result, MNOs can boost customer acquisition and maximise their conversion rates through a simple and seamless experience. One mobile operator has reported a 40% conversion rate using the ‘Try before you buy’ approach. 

Another successful use case is eSIM for Travel. Many MNOs are launching eSIM prepaid offers to provide connectivity to tourists visiting their country or region. Marketed through travel agencies and airports, as well as the MNO’s app and website, eSIM-based packages enable customers to buy connectivity and data at local rates before they depart for another country. That means travellers avoid the hassle of queuing at retail stores for prepaid connectivity after arriving at their destination. They also enjoy the cost savings and optimum experience provided by local connectivity. 

 

#6 - Overall, how should operators prepare for the eSIM revolution?  

MNOs should identify and implement a strategy that is specific to the eSIM. Rapid adoption of the eSIM is creating compelling new opportunities for operators to differentiate, build additional revenue streams, and enhance the user experience. Above all else, the eSIM provides MNOs with the platform to create a seamless experience at every stage of the subscriber lifecycle.

At the front end, this means providing the best possible user experience, including a smooth, fully digital onboarding process and a simpler customer journey. Giving staff the right training will further boost customer care. At the back end, the eSIM again enables simpler and more cost-effective processes. In particular, as MNOs deal with higher volumes of eSIM activations, as well as the growing diversity of both consumer and IoT devices running on 4G and 5G networks, managing subscription profile inventories will be mission critical in terms of maintaining service continuity and the user experience.

Thales is at the heart of this new eSIM ecosystem. By sharing our expertise and experience of best practices we can help mobile operators to make the most of a new era in connectivity. Over 450 MNOs worldwide use our solutions, and we have strong relationships with over 100 OEMs in the consumer and IoT markets. This presence enables Thales to provide comprehensive solutions for all the key stakeholders. 

23 Jul 2024 | Digital Identity and Security | Mobile | eSIM

Our latest Q&A presents exclusive insights and best practices for Mobile Operators, offering a tailored approach to accelerating eSIM adoption.

Thales reports its 2024 half-year results

Tue, 07/23/2024 - 07:00

Order intake: €10.8 billion, up 26% (+23% on an organic basis1)
Order book: €47 billion, a new record high
Sales: €9.5 billion, up 8.9% (+6.0% on an organic basis)
EBIT2: €1,096 million, up 10.4% (+4.7% on an organic basis)
Adjusted net income, Group share2: €866 million, up 6%
Consolidated net income, Group share: €1,017 million, up 57%
Free operating cash flow2,3: €23 million

2024 targets confirmed. Ranges narrowed:
Book-to-bill ratio4 above 1
Organic sales growth between +5% and +6%6
EBIT margin: 11.7% to 11.8%7

Thales’ Board of Directors (Euronext Paris: HO) met on July 22, 2024 to review the financial statements for the first half of 20248.

 

“We once again achieved strong sales growth in the first half of the year, with record orders including three contracts with a unit value in excess of €500 million. This reflects strong demand from our customers and the quality of the Group’s solutions.
Organic sales growth came to +6%, thanks to the good performance of the Aeronautics and Defence & Security businesses.
The Group’s EBIT margin continued to rise, reaching a new all-time high for a first half, at 11.5%. In this context, we are continuing to invest to increase our production capacity and support the sustainable growth of our business. We are also accelerating our investments in innovation to strengthen our technological leadership.
Other priorities for the year include the continued integration of recent acquisitions Imperva and Cobham Aerospace Communications. This integration is proceeding as planned. We are also committed to restoring sustainable profitability in the space business with the implementation of the Thales Alenia Space adaptation plan.
We confirm our annual outlook and have refined it based on the improved visibility we have regarding the rest of the year.
I would like to thank our 81,000 employees for their unwavering commitment to serving our customers.”
Patrice Caine, Chairman & Chief Executive Officer

Key figures

Order intake in the first half of 2024 amounted to €10,767 million, up 26% from H1 2023 (+23% on an organic basis, i.e. at constant scope and exchange rates). The Group continued to record excellent sales momentum in most of its businesses. At June 30, 2024, the consolidated order book totaled €47 billion, up 16% compared to the first half of 2023, reaching a new all-time high.

Sales totaled €9,493 million, up 8.9% from H1 2023, and up 6.0% at constant scope and exchange rates. Sales growth was driven in particular by the robust performance of Avionics and Defence & Security.

In the first half of 2024, the Group posted EBIT of €1,096 million (11.5% of sales), compared to €993 million (11.4% of sales) in the first half of 2023, an increase of 10.4% (+4.7% on an organic basis).

At €866 million, adjusted net income, Group share rose by 6%, taking into account the increase in debt servicing following the acquisitions made over the past year.

Consolidated net income, Group share amounted to €1,017 million, up 57% compared to H1 2023, driven by the increase in adjusted net income and the capital gain on the disposal of the Transport activity.

Free operating cash flow from continuing operations11 (excluding the Transport business following its disposal), was positive at €23 million compared with €253 million in the first half of 2023. As announced, in the first half of 2024, the Group recorded a significant increase in its working capital requirement compared to the first half of 2023.

Net debt reached -€4,594 million at June 30, 2024 compared with -€4,190 million at December 31, 2023.

Order intake

Order intake in H1 2024 amounted to €10,767 million, up 26% compared to H1 2023 (+23% at constant scope and exchange rates13). The book-to-bill ratio was 1.13, a substantial increase compared to the first half of 2023, when it was 0.98.

This strong order intake was driven by large orders (orders with a unit value of more than €100 million), the cumulative amount of which came to €3,602 million, up 116% compared with the first half of 2023 (€1,671 million). Three orders in excess of €500 million were recorded during the period.

Thales recorded a total of 12 large orders with a unit value of more than €100 million in the first half of 2024, compared to nine in the first half of 2023, as follows:

Four large orders booked in Q1 2024:
The entry into force of the third phase of the order placed by Indonesia in 2022 for the purchase of 42 Rafale aircraft (18 aircraft and support services);
Order of an air surveillance system for a military customer in the Middle East;
Second tranche of the contract signed in 2023 between France and Italy for the production of 400 ASTER B1NT ground-to-air missiles;
Phased contract with the French Defence Procurement Agency (DGA) to develop the next generation of sonars to equip French nuclear-powered ballistic-missile submarines (SSBN).

Eight large orders booked in Q2 2024:
Order of two new F126 frigates by the German Navy. This additional contract brings the number of F126 frigates acquired by the German Navy to six in the past four years;
Exomars 2028, a contract signed between industrial prime contractor Thales Alenia Space and the European Space Agency (ESA) to relaunch the European space mission dedicated to the exploration of the Red Planet;
Order by SKY Perfect JSAT to Thales Alenia Space of JSAT-31, a new generation of satellite reconfigurable in orbit using Space INSPIRE technology;
Order by France’s Joint Munitions Command (SiMu) of tens of thousands of 120mm rifled ammunition;
Order for a next generation cloud native "FLYTEDGE" InFlight Entertainment System for a major worldwide airline;
Order by an Asian customer of latest-generation Ground Master 400 Alpha long-range air surveillance radars;
Order by the Dutch Ministry of Defence of seven additional Ground Master 200 multi-mission compact radars;
Service contract for the maintenance of the Royal Australian Navy fleet.

At €7,165m, order intake with a unit value of less than €100 million also increased by 4% compared to the first half of 2023.

Geographically14, order intake in emerging markets amounted to €3,439 million, up by +111%. At €7,328 million, order intake in mature markets continued to grow (+6%).

Order intake in the Aerospace segment totaled €2,688 million, versus €2,349 million in H1 2023 (+16% at constant scope and exchange rates). This double-digit increase reflects two contrasting trends:

On the one hand, the avionics market remains strong, with order intake recording organic growth of 31%;
On the other hand, the order intake in the space business declined slightly over the period. The two major orders signed with the ESA and SKY Perfect JSAT in the second quarter of 2024 did not fully offset the weakness of the first quarter, which was penalized by a high basis for comparison.

At €6,120 million compared with €4,498 million in the first half of 2023, i.e. a 36% increase at constant scope and exchange rates, order intake in the Defence & Security segment reflected strong sales momentum. This segment recorded nine new orders with a unit value of more than €100 million in the first half of the year. The order book therefore increased to €36.5 billion (versus €30.7 billion in the first half of 2023), representing more than 3.7 years of sales.

At €1,931 million, order intake in the Digital Identity & Security segment was in line with sales, as most of the activities in this segment operate on short cycles. The order book is therefore not significant.

Sales

Sales for the first half of 2024 amounted to €9,493 million, compared with €8,716 million in the first half of 2023, an increase of 8.9% as reported, or +6.0% at constant scope and exchange rates.

Geographically15, this increase in sales was mainly driven by mature countries (+6.9% in organic growth). Emerging markets posted organic growth of +2.7% over the period.

Sales in the Aerospace segment amounted to €2,582 million, up 4.6% compared to H1 2023 (+4.8% at constant scope and exchange rates). This change reflects mixed trends:

Double-digit growth in aeronautics; Stable activity in the space segment.

Sales in the Defence & Security segment totaled €4,938 million, up 8.7% compared to H1 2023 (+8.5% at constant scope and exchange rates). After a very strong first quarter (+13.4% at constant scope and exchange rates), as expected, activity recorded more moderate growth in the second quarter (+4.5% at constant scope and exchange rates). Organic growth in this segment's sales over the half-year is ahead of the confirmed annual target of “mid-single digit plus” organic growth in activity.

In the Digital Identity & Security segment, sales were up 0.4% at constant scope and exchange rates to €1,934 million. Following a decline in the first quarter (-2.5% at constant scope and exchange rates), the second quarter came back to positive organic growth (+3.1%). The continued ramp-up of product digitalization (including connectivity solutions in mobile) and the good momentum of the Biometric and Cybersecurity businesses more than offset the slowdown in Banking and Payment solutions markets compared with a particularly high base in the first half of 2023 in terms of both volume and pricing.

Results

In H1 2024, the Group posted EBIT16 of €1,096 million (11.5% of sales), compared with €993 million (11.4% of sales) in H1 2023.

The Aerospace segment posted EBIT of €167 million (6.5% of sales), versus EBIT of €169 million (6.9% of sales) in H1 2023. The segment's EBIT margin was driven by the strong performance of avionics activities, which posted a double-digit EBIT margin. However, it was impacted by the negative EBIT margin in space, resulting from i) a lack of growth in the business and ii) an increase in Research and Development expenses.

In the Defence & Security segment, EBIT amounted to €639 million, versus €576 million in H1 2023 (+11.0% at constant scope and exchange rates). The segment’s margin was up slightly against last year, at 12.9% (12.7% in the first half of 2023).

At €272 million (14.1% of sales) compared with €246 million (14.7% of sales) in the first half of 2023, the EBIT margin of the Digital Identity & Security (“DIS”) segment was still strong. This illustrates DIS’ ability to maintain its commercial margins in the competitive markets of Banking and Payment Services and Mobile Connectivity Solutions.

Excluding Naval Group, unallocated EBIT amounted to -€26 million, compared with -€42 million in H1 2023.

At €44 million in the first half of 2024, Naval Group's contribution to EBIT was fully in line with last year's.

As expected, cost of net financial debt (-€87 million compared with €13 million in the first half of 2023) rose sharply given the substantial increase in the amount of debt following recent acquisitions. Other adjusted financial income17 stood at +€32 million over the first six months of 2024, compared with -€13 million in the first half of 2023, reflecting the exceptional positive impact of dividend payments from non-consolidated investments as well as positive foreign exchange results. The adjusted financial expense on pensions and other long-term employee benefits17 improved from -€38 million to -€28 million in the first half of 2024, due to the removal of the interest expense following the transfer of pension obligations in the United Kingdom carried out in December 2023.

At €19 million compared with €36 million in H1 2023, the adjusted net income, Group share, from discontinued operations was in line with trends in the Transport business, which was sold on May 31, 2024.

Adjusted net income, Group share17 thus amounted to €866 million, compared to €819 million in H1 2023, after an adjusted income tax charge17 of -€193 million, compared to -€175 million in H1 2023. The effective tax rate stood at 20.4% at June 30, 2024, compared with 20.0% at June 30, 2023. ​

Adjusted net income, Group share, per share17 amounted to €4.21, up 8% compared to H1 2023 (€3.91).

Consolidated net income, Group share amounted to €1,017 million, up 57% compared to June 30, 2023 (€649 million), including the capital gain on the disposal of the Transport business.

Financial position at June 30, 2024

Free operating cash flow from continuing operations (excluding the Transport business following its disposal), was positive at €23 million compared with €253 million in the first half of 2023. As announced, in the first half of 2024, the Group recorded a significant increase in its working capital requirement compared to the first half of 2023, due to the voluntary build-up of inventories of products for which Thales wants to increase its resilience. The Group's debt servicing also rose sharply given the increase in the amount of the Group's net debt between end-June 2023 and end-June 2024.

In the first half of the year, the net balance of disposals and acquisitions of subsidiaries amounted to €528 million. The Group completed the sale of its Transport business to Hitachi Rail on May 31, 2024 and the acquisition of Cobham Aerospace Communications on April 2, 2024.

Under the share buyback program covering a maximum of 3.5% of the capital announced in March 2022, which ended on March 31, 2024, 1,245,757 shares were purchased between January 1, 2024 and June 30, 2024, for a total of -€176 million. Over the total period of the program, Thales bought back 7,469,396 shares, representing a gross buyback amount of -€966 million.

At June 30, 2024, net debt amounted to -€4,594 million compared with -€4,190 million at December 31, 2023. This item was mainly impacted by the net balance of disposals and acquisitions of subsidiaries for a positive amount of €528 million (acquisition of Cobham Aerospace Communications and disposal of the Transport business), dividend payments for -€534 million (-€468 million in the first half of 2023), new lease liabilities for -€95 million (-€49 million in the first half of 2023) and the share buyback program for -€176 million.

Shareholders’ equity, Group share amounted to €7,283 million, compared with €6,830 million at December 31, 2023. This increase reflects the contribution of consolidated net income, Group share (+€1,017 million) less the dividend payout (-€534 million) and share buybacks (-€176 million).

Outlook

As illustrated by the record order intake in the first half of the year, Thales has a solid positioning in each of its major markets.

In the second half of 2024, Thales' sales should therefore continue to benefit from the good momentum of its activities.

With regard to the EBIT margin, Thales confirms its expectation of continuous growth in its profitability compared to 2023, driven by the double-digit profitability of the Aeronautics and Defence & Security activities. However, the EBIT margin of the Space business is expected to be negative for the full year, due to the combination of: i) the fall in commercial telecommunications activity over the year, ii) the Group’s decision to maintain a high level of Research and Development investments in a market where demand remains solid in the medium term, and iii) the impact of restructuring costs linked to the business adaptation plan.

As a result, assuming there are no major new disruptions in the global economy, in the health context or in the global supply chains, Thales confirms its 2024 annual objectives, as announced in March, refining them as follows:

A book-to-bill ratio unchanged, above 1;
Organic sales growth of between +5% and +6%18, corresponding to sales in the range of €19.9 billion to €20.1 billion19;
An EBIT margin between 11.7% and 11.8%20.

1 In this press release, “organic” means “at constant scope and exchange rates”. See note on methodology on page 11 and calculation on page 16.

2 Non-GAAP financial indicators, see definitions in the appendices, page 11.

3 Free operating cash flow from continuing operations, excluding the Transport activity sold on May 31, 2024.

4 Book-to-bill ratio: ratio of order intake to sales.

5 Target unchanged.

6 i.e. between €19.9 billion and €20.1 billion based on the scope and exchange rates of July 2024, compared with initial guidance of 4% to 6% announced in March 2024.

7 Compared with initial guidance of 11.7% to 12% announced in March 2024.

8 At the date of this press release, the limited review of the financial statements has been completed and the statutory auditors’ report has been issued following the meeting of the Board of Directors.

9 Non-GAAP financial indicators, see definitions in the appendices, page 11.

10 “Cash and cash equivalents” no longer includes the assets transferred from the pension fund in the United Kingdom.

11 Free operating cash flow from continuing operations, excluding the Transport activity sold on May 31, 2024.

12 Mature markets: Europe, North America, Australia, New Zealand. Emerging markets: all other countries. See table on page 15.

13 Taking into account a scope effect of €263 million and a negative currency effect of -€17 million.

14 See table on page 15.

15 Mature markets: Europe, North America, Australia, New Zealand. Emerging markets: all other countries. See table on page 15.

16 Non-GAAP financial indicator, see definition in the appendices on page 11 and the calculation on pages 13 and 14.

17 Non-GAAP financial indicator, see definition in the appendices on page 11 and the calculation on pages 13 and 14.

18 Compared with +4% and +6% initially announced.

19 Based on July 2024 scope and exchange rates.

20 Compared to the range of 11.7% to 12.0% initially announced.

This press release contains certain forward-looking statements. Although Thales believes that its expectations are based on reasonable assumptions, actual results may differ significantly from the forward-looking statements due to various risks and uncertainties, as described in the Company's Universal Registration Document, which has been filed with the French financial markets authority (Autorité des marchés financiers – AMF).

Documents: Thales reports its 2024 half-year results - Press Release - 23 July 2024; Thales - Condensed consolidated financial statements at 30 June 2024; Thales - 2024 H1 - slideshow, 23 July 2024. Contact: Alexandra Boucheron, Head of Media Relations - Thales, Analysts/Investors. Press release, 23 Jul 2024. Thales’ Board of Directors (Euronext Paris: HO) met on July 22, 2024 to review the financial statements for the first half of 20248.

Spruce Systems

Provably Forgotten Signatures: Adding Privacy to Digital Identity

We can enhance existing digital identity systems to support an important privacy feature known as “unlinkability”: sharing attributes without attribution.

Thank you to Ryan Hurst (SpruceID Advisor, former Google/Microsoft), Dan Boneh (Stanford), Abhi Shelat (Google/Northeastern), Foteini Baldimtsi (GMU), and Dick Hardt (Hellō) for reviewing the technical approach in this article, and providing several suggestions which improved the work. 

At SpruceID, our mission is to let users control their data across the web. We build systems based on Verifiable Digital Credentials (VDCs) to make the online and offline worlds more secure, while protecting the privacy and digital autonomy of individuals.

Developing models to implement this VDC future requires carefully thinking through every risk of the new model, including risks that may only emerge in the future. One of the edge-case risks privacy researchers have identified is sometimes known as “linkability.”

Linkability refers to the possibility of profiling people by collating data from their use of digital credentials. This risk commonly arises when traceable digital signatures or identifiers are used repeatedly, allowing different parties to correlate many interactions back to the same individual, thus compromising privacy. This can create surveillance potential across societies, whether conducted by the private sector, state actors, or even foreign adversaries.

In this work, we explore an approach that adds privacy by upgrading existing systems to prevent linkability (or “correlation”), rather than overhauling them entirely. It aims to be compatible with already-deployed implementations of digital credential standards such as ISO/IEC 18013-5 mDL, SD-JWT, and W3C Verifiable Credentials, while also aligning with cryptographic security standards such as FIPS 140-2/3. It is compatible with and can even pave the way for future privacy technologies such as post-quantum cryptography (PQC) and zero-knowledge proofs (ZKPs), while unlocking beneficial use cases today.

Why This Matters Now 

Governments are rapidly implementing digital identity programs. In the US, 13 states already have live mobile driver’s license (mDL) programs, with over 30 states considering them, and the number is growing. Earlier this year, the EU approved a digital wallet framework that will mandate live digital wallets across its member states by 2026. This continues the momentum of the last generation of digital identity programs with remarkable uptake, such as India’s Aadhaar, which is used by over 1.3 billion people. However, it is not clear that these frameworks plan for guarantees like unlinkability in the base technology, even as adoption momentum increases.

Some think that progress on digital identity programs should stop entirely until perfect privacy is solved. However, that train has long left the station, and calls to dismantle what already exists, has sunk costs, and seems to function may fall on deaf ears. There are indeed incentives for the momentum to continue: demands for convenient online access to government services or new security systems that can curb the tide of AI-generated fraud. Also, it’s not clear that the best approach is to design the “perfect” system upfront, without the benefit of iterative learning from real-world deployments.

In the following sections, we examine two privacy risks that may already exist in identity systems today, and mitigation strategies that can be added incrementally.

Digital ID Risk: Data Linkability via Collusion

One goal for a verifiable digital credential system is that a credential can be used to present only the necessary facts in a particular situation, and nothing more. For instance, a VDC could prove to an age-restricted content website that someone is over a certain age, without revealing their address, date of birth, or full name. This ability to limit disclosures allows the use of functional identity, and it’s one big privacy advantage of a VDC system over today’s identity systems that store a complete scan of a passport or driver’s license. However, even with selective disclosure of data fields, it is possible to unintentionally have those presentations linkable if the same unique values are used across verifiers.
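As an illustration of selective disclosure, here is a minimal sketch of the salted-hash commitment idea used by formats like SD-JWT. The function names and claim values are illustrative only, not part of any standard:

```python
import hashlib
import json
import secrets

def commit_claims(claims: dict) -> tuple[dict, dict]:
    """Issuer side: replace each claim with a salted hash commitment.

    Returns (disclosures, commitments). The issuer signs only the
    commitments; the holder keeps the disclosures and reveals a subset.
    """
    disclosures, commitments = {}, {}
    for name, value in claims.items():
        salt = secrets.token_hex(16)  # fresh salt per claim prevents guessing
        blob = json.dumps([salt, name, value]).encode()
        disclosures[name] = (salt, value)
        commitments[name] = hashlib.sha256(blob).hexdigest()
    return disclosures, commitments

def verify_disclosure(commitments: dict, name: str, salt: str, value) -> bool:
    """Verifier side: check a revealed (salt, value) against the commitment."""
    blob = json.dumps([salt, name, value]).encode()
    return commitments[name] == hashlib.sha256(blob).hexdigest()

disclosures, commitments = commit_claims(
    {"age_over_18": True, "name": "Alex Example", "address": "123 Main St"}
)
# The holder reveals only the age claim to the content website:
salt, value = disclosures["age_over_18"]
assert verify_disclosure(commitments, "age_over_18", salt, value)
```

Note that even with this per-claim hiding, the issuer's signature over the commitments is itself a unique value, which is exactly the linkability concern the rest of this section examines.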

In our example, if a user proves their age to access an age-restricted content website (henceforth referred to simply as “content website”), and then later verifies their name at a bank, both interactions may run the risk of revealing more information than the user wanted if the content website and bank colluded by comparing common data elements they received. Although a check for “over 18 years old” and a name don’t have any apparent overlap, there are technical implementation details such as digital signatures and signing keys that, when reused across interactions, can create a smoking gun.

Notably, a reused digital signature is uniquely distinguishable, and even fresh signatures made with the same user key can be correlated. Both can work against the user to reveal more information than intended.

Verifier-Verifier Collusion

To maximize privacy, these pieces of data presented using a VDC should be “unlinkable.” For instance, if the same user who’d proven their age at a content website later went to a bank and proved their name, no one should be able to connect those two data points to the same ID holder, not even if the content website and the bank work together. We wouldn’t want the bank to make unfair financial credit decisions based on the perceived web browsing habits of the user.

However, VDCs are sometimes built on a single digital signature, a unique value that can be used to track or collate information about a user if shared repeatedly with one or more parties. If the content website in our example retains the single digital signature created by the issuing authority, and that same digital signature was also shared with the bank, then the content website and the bank could collude to discover more information about the user than what was intended.

The case where two or more verifiers of information can collude to learn more about the user is known as verifier-verifier collusion and can violate user privacy. While a name-age combination may seem innocuous, a third-party data collector could, over time, assemble a variety of data about a user simply by tracking their usage of unique values across many different verifiers, whether online or in-person. At scale, these issues can compound into dystopian surveillance schemes by allowing every digital interaction to be tracked and made available to the highest bidders or an unchecked central authority.
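To make the collusion concrete, here is a toy illustration, with entirely hypothetical log data, of how two verifiers who each retain a reused signature value can join their records:

```python
# Each verifier logs the raw signature value it saw alongside local data.
content_site_log = {"sig_abc123": {"age_over_18": True, "visits": 42}}
bank_log = {"sig_abc123": {"name": "Alex Example"}}

# Collusion: joining both logs on the shared signature value links
# the records back to the same person.
linked = {
    sig: {**content_site_log[sig], **bank_log[sig]}
    for sig in content_site_log.keys() & bank_log.keys()
}
print(linked)  # the user's browsing habits and legal name, now combined
```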

Cycling Signatures to Prevent Verifier-Verifier Collusion

Fortunately, a simple solution exists to help prevent verifier-verifier collusion by cycling digital signatures so that each is used only once. When a new VDC is issued by a post office, DMV, or other issuer, it can be provisioned not with a single signature from the issuing authority that produces linkable usage, but with many different signatures from the issuing authority. If user device keys are necessary for using the VDC, as in the case of mobile driver’s licenses, several different keys can be used as well. A properly configured digital wallet would then use a fresh signature (and potentially a fresh key) every time an ID holder uses their VDC to attest to particular pieces of information, ideally preventing linkage to the user through the signatures.

Using our earlier example of a user who goes to a content website and uses their VDC to prove they are over 18, the digital wallet presents a signature for this interaction, and doesn’t use that signature again. When the user then visits their bank and uses a VDC to prove their name for account verification purposes, the digital wallet uses a new signature for that interaction.

Because the signatures are different across each presentation, the content website and the bank cannot collude to link these two interactions back to the same user without additional information. The user can even use different signatures every time they visit the same content website, so that the content website cannot even tell how often the user visits from repeated use of their digital ID.
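A minimal sketch of this batch-issuance idea, assuming the issuer pre-signs many independent copies of the credential. HMAC stands in for a real digital signature scheme here, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def issue_batch(claims: bytes, n: int) -> list[tuple[bytes, bytes]]:
    """Issuer: produce n independent (nonce, signature) pairs for one credential.

    Each pair verifies on its own and shares no value with the others,
    so presentations using different pairs cannot be joined.
    """
    batch = []
    for _ in range(n):
        nonce = secrets.token_bytes(16)
        sig = hmac.new(ISSUER_KEY, nonce + claims, hashlib.sha256).digest()
        batch.append((nonce, sig))
    return batch

class Wallet:
    def __init__(self, claims: bytes, batch: list):
        self.claims, self.batch = claims, batch

    def present(self) -> tuple[bytes, bytes, bytes]:
        """Use each signature exactly once, then discard it."""
        nonce, sig = self.batch.pop()
        return self.claims, nonce, sig

claims = b"age_over_18=true"
wallet = Wallet(claims, issue_batch(claims, n=50))
p1 = wallet.present()  # shown to the content website
p2 = wallet.present()  # shown to the bank
assert p1[2] != p2[2]  # no shared signature value for verifiers to join on
```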

Issuer-Verifier Collusion

A harder problem to solve is known as “issuer-verifier” collusion. In this scenario, the issuer of an ID–or, more likely, a rogue agent within the issuing organization–remembers a user’s unique values (such as keys or digital signatures) and, at a later time, combines them with data from places where those keys or signatures are used. This is possible even in architectures without “phone home” because issuing authorities (such as governments or large institutions) often have power over organizations doing the verifications, or have been known to purchase their logs from data brokers. Left unsolved, the usage of digital identity attributes could create surveillance potential, like leaving a trail of breadcrumbs that can be used to re-identify someone if recombined with other data the issuer retains.

Approaches Using Zero-Knowledge Proofs

Implementing advanced cryptography for achieving unlinkability, such as with Boneh–Boyen–Shacham (BBS) signatures in decentralized identity systems, has recently gained prominence in the digital identity community. These cryptographic techniques enable users to demonstrate possession of a signed credential without revealing any unique, correlatable values from the credentials.

Previous methods like AnonCreds and U-Prove, which rely on RSA signatures, paved the way for these innovations. Looking forward, techniques such as zk-SNARKs and zk-STARKs (which, when implemented with certain hashing algorithms or primitives such as lattices, can meet post-quantum cryptography requirements) offer potential advancements originating from the blockchain ecosystem.

However, integrating these cutting-edge cryptographic approaches into production systems that meet rigorous security standards poses challenges. Current standards like FIPS 140-2 and FIPS 140-3, which outline security requirements for cryptographic modules, present compliance hurdles for adopting newer cryptographic algorithms such as the BLS 12-381 Curve used in BBS and many zk-SNARK implementations. High assurance systems, like state digital identity platforms, often mandate cryptographic operations to occur within FIPS-validated Hardware Security Modules (HSMs). This requirement necessitates careful consideration, as implementing these technologies outside certified HSMs could fail to meet stringent security protocols.

Moreover, there's a growing industry shift away from RSA signatures due to concerns over their long-term security and increasing emphasis on post-quantum cryptography, as indicated by recent developments such as Chrome's adoption of post-quantum ciphers.

Balancing the need for innovation with compliance with established security standards remains a critical consideration in advancing digital identity and cryptographic technologies.

A Pragmatic Approach for Today: Provably Forgotten Signatures

Given the challenges in deploying zero-knowledge proof systems in today’s production environments, we are proposing a simpler approach that, when combined with key and signature cycling, can provide protection from both verifier-verifier collusion and issuer-verifier collusion by using confidential computing environments: the issuer can forget the unique values that create the risk in the first place, and provide proof of this deletion to the user. This is implementable today, and would be supported by existing hardware security mechanisms that are suitable for high-assurance environments.

It works like this:

1. During the final stages of digital credential issuance, all unique values, including digital signatures, are exclusively processed in plaintext within a Trusted Execution Environment (TEE) of confidential computing on the issuer’s server-side infrastructure.
2. Issuer-provided data required for credential issuance, such as fields and values from a driver’s license, undergoes secure transmission to the TEE.
3. Sensitive user inputs, such as unique device keys, are encrypted before being transmitted to the TEE. This encryption ensures that these inputs remain accessible only within the secure confines of the TEE.
4. Within the TEE, assembled values from both the issuer and user are used to perform digital signing operations. This process utilizes a dedicated security module accessible solely by the TEE, thereby generating a digital credential payload.
5. The resulting digital credential payload is encrypted using the user’s device key and securely stored within the device’s hardware.
6. Upon completion, an attestation accompanies the credential, verifying that the entire process adhered to stringent security protocols.
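The flow above can be sketched as follows. This is a toy model only: XOR-based keystream encryption and HMAC stand in for real authenticated encryption, HSM-backed signing, and remote attestation, and every name is hypothetical:

```python
import hashlib
import hmac
import secrets

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for real authenticated encryption (illustration only).

    XOR with a hash-derived keystream; the same call decrypts.
    """
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

class TrustedExecutionEnvironment:
    """Models the issuer-side TEE: unique values exist in plaintext only here."""

    def __init__(self):
        self._signing_key = secrets.token_bytes(32)  # accessible only to the TEE

    def issue(self, issuer_fields: bytes, encrypted_device_key: bytes,
              transport_key: bytes) -> tuple[bytes, bytes]:
        # Step 3: the user's device key is decrypted only inside the TEE.
        device_key = xor_encrypt(transport_key, encrypted_device_key)
        # Step 4: sign the assembled issuer + user values inside the TEE.
        signature = hmac.new(self._signing_key, issuer_fields + device_key,
                             hashlib.sha256).digest()
        # Step 5: encrypt the payload to the user's device key.
        payload = xor_encrypt(device_key, issuer_fields + signature)
        # Step 6: plaintext unique values go out of scope ("forgotten");
        # an attestation of the process accompanies the credential.
        attestation = hashlib.sha256(b"attested:" + payload).digest()
        return payload, attestation

transport_key = secrets.token_bytes(32)
device_key = secrets.token_bytes(32)
tee = TrustedExecutionEnvironment()
payload, attestation = tee.issue(b"name=Alex;age_over_18=true",
                                 xor_encrypt(transport_key, device_key),
                                 transport_key)
# Only the holder's device, holding device_key, can decrypt the payload.
plaintext = xor_encrypt(device_key, payload)
assert plaintext.startswith(b"name=Alex")
```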

This approach ensures:

Protection Against Collusion: By employing confidential computing and strict segregation of cryptographic operations within a TEE, the risk of verifier-verifier and issuer-verifier collusion is mitigated.
Privacy and Security: User data remains safeguarded throughout the credential issuance process, with sensitive information encrypted and managed securely within trusted hardware environments.
Compliance and Implementation: Leveraging existing hardware security mechanisms supports seamless integration into high-assurance environments, aligning with stringent regulatory and security requirements.

By prioritizing compatibility with current environments instead of wholesale replacement, we propose that existing digital credential implementations, including mobile driver’s licenses operational in 13 states and legislatively approved in an additional 18 states, could benefit significantly from upgrading to incorporate this technique. This upgrade promises enhanced privacy features for users without necessitating disruptive changes.

New Approach, New Considerations

However, as with all new approaches, there are some considerations when using this one as well. We will explore a few of them, but this is not an exhaustive list.

The first consideration is that TEEs have been compromised in the past, and so they are not foolproof. Therefore, this approach is best incorporated as part of a defense-in-depth strategy, with many layered safeguards against a system failure. Many critical TEE failures have resulted from multiple things going wrong at once, such as giving untrusted hosts access to low-level system APIs in the case of blockchain networks, or allowing arbitrary code to run on the same systems in the case of mobile devices.

One benefit of implementing this approach within credential issuer infrastructures is that the environment can be better controlled, so more forms of isolation are possible to prevent these kinds of vulnerability chaining. Issuing authorities are not likely to allow untrusted hosts to federate into their networks, nor would they allow arbitrary software to be uploaded and executed on their machines. Many more environmental controls are possible, such as intrusion detection systems, regular firmware patching, software supply chain policies, and physical security perimeters.

We are solving the problem by shifting the trust model: the wallet trusts the hardware (TEE manufacturer) instead of the issuing authority.

Another consideration is that certain implementation guidelines for digital credentials recommend retention periods for unique values for issuing authorities. For example, AAMVA’s implementation guidelines include the following recommendations for minimum retention periods: 

Source: AAMVA Mobile Driver's License Implementation Guidelines, r1.2

To navigate these requirements, it is possible to ensure that the retention periods are enforced within the TEE by allowing for deterministic regeneration of the materials only during a fixed window when requested by the right authority. The request itself can create an auditable trail to ensure legitimate usage. Alternatively, some implementers may choose to override (or update) the recommendations to prioritize creating unlinkability over auditability of certain values that may be of limited business use.
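A sketch of how such window-limited, auditable regeneration could look. The names are hypothetical, and HMAC-based derivation stands in for the real signing scheme:

```python
import hashlib
import hmac
import secrets
import time

class RetentionGate:
    """Regenerates a signature deterministically from a TEE-held seed,
    but only within a fixed retention window, leaving an audit record."""

    def __init__(self, retention_seconds: int):
        self._seed = secrets.token_bytes(32)          # never leaves the TEE
        self._expiry = time.time() + retention_seconds
        self.audit_log = []

    def regenerate(self, credential_id: str, requester: str) -> bytes:
        if time.time() > self._expiry:
            raise PermissionError("retention window has elapsed")
        # Every access is recorded, creating the auditable trail.
        self.audit_log.append((time.time(), requester, credential_id))
        return hmac.new(self._seed, credential_id.encode(),
                        hashlib.sha256).digest()

gate = RetentionGate(retention_seconds=3600)
sig1 = gate.regenerate("cred-42", requester="state-auditor")
sig2 = gate.regenerate("cred-42", requester="state-auditor")
assert sig1 == sig2              # deterministic: no plaintext copy is stored
assert len(gate.audit_log) == 2  # each request leaves an audit entry
```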

A third consideration is increased difficulty for the issuing authority to detect compromise of key material if they do not retain the signatures in plaintext. To mitigate this downside, it is possible to use data structures that are able to prove set membership status (e.g., was this digital signature issued by this system?) without linking to source data records or enumeration of signatures, such as Merkle trees and cryptographic accumulators. This allows for the detection of authorized signatures without creating linkability. It is also possible to encrypt the signatures so that only the duly authorized entities, potentially involving judicial processes, can unlock the contents.
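A compact sketch of the Merkle-tree variant mentioned above: the issuer retains only the root, yet can later check whether a presented signature belongs to the issued set without enumerating it. All names here are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes (with left/right position) from leaf to root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, proof: list) -> bool:
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

signatures = [f"sig-{i}".encode() for i in range(8)]
root = merkle_root(signatures)        # issuer keeps only this 32-byte value
proof = merkle_proof(signatures, 3)
assert verify(root, b"sig-3", proof)  # membership, without enumerating the set
```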

Paving the Way for Zero-Knowledge Proofs

We believe that the future will be built on zero-knowledge proofs that support post-quantum cryptography. Every implementation should consider how it may eventually transition to these new proof systems, which are becoming faster and easier to use and can provide privacy features such as selective disclosure across a wide variety of use cases.

Already, there is fast-moving research on using zero-knowledge proofs in wallets to demonstrate knowledge of unique signatures and possibly the presence of a related device key for payloads from existing standards such as ISO/IEC 18013-5 (mDL), biometric templates, or even live systems like Aadhaar. In these models, it’s possible for the issuer to do nothing different, and the wallet software is able to use zero-knowledge cryptography with a supporting verifier to share attributes without attribution.

These “zero-knowledge-in-the-wallet” approaches require both the wallet and the verifier to agree on implementing the technology, but not the issuer. The approach outlined in this work requires only the issuer to implement the technology. They are not mutually exclusive, and it is possible to have both approaches implemented in the same system. Combining them may be especially desirable when there are multiple wallets and/or verifiers, to ensure a high baseline level of privacy guarantee across a variety of implementations.

However, should the issuer, wallet, and verifier (and perhaps coordinating standards bodies such as the IETF, NIST, W3C, and ISO) all agree to support the zero-knowledge approach atop quantum-resistant rails, then it’s possible to move the whole industry forward while smoothing out the new privacy technology’s rough edges. This is the direction we should go towards as an industry.

Tech Itself is Not Enough

While these technical solutions can bring enormous benefits to baseline privacy and security, they must be combined with robust data protection policies to result in safe user-controlled systems. If personally identifiable information is transmitted as part of the user’s digital credential, then it is by definition correlatable; privacy cannot be addressed at the technical protocol level and must instead be addressed by policy.

For example, you can’t unshare your full name and date of birth. If your personally identifiable information was sent to an arbitrary computer system, then no algorithm on its own can protect you from the undercarriage of tracking and surveillance networks. This is only a brief sample of the kind of problem that only policy is positioned to solve effectively. Other concerns range from potentially decreased accessibility if paper solutions are no longer accepted, to normalizing the sharing of digital credentials towards a “checkpoint society.”

Though it is out of scope of this work, it is critical to recognize the important role of policy to work in conjunction with technology to enable a baseline of interoperability, privacy, and security.

The Road Ahead

Digital identity systems are being rolled out in production today at a blazingly fast pace. While they utilize today’s security standards for cryptography, their current deployments do not incorporate important privacy features into the core system. We believe that ultimately we must upgrade digital credential systems to post-quantum cryptography that can support zero-knowledge proofs, such as ZK-STARKs, but the road ahead is a long one given the timelines it takes to validate new approaches for high assurance usage, especially in the public sector.

Instead of scorching the earth and building anew, our proposed approach can upgrade existing systems with new privacy guarantees around unlinkability by changing out a few components, while keeping in line with current protocols, data formats, and requirements for cryptographic modules. With this approach, we can leave the door open for the industry to transition entirely to zero-knowledge-based systems. It can even pave the path for them by showing that it is possible to meet requirements for unlinkability, so that when policymakers review what is possible, there is a readily available example of a pragmatic implementation. 

We hope to collaborate with the broader community of cryptographers, public sector technologists, and developers of secure systems to refine our approach toward production usage. Specifically, we wish to collaborate on:

- Enumerated requirements for TEEs around scalability, costs, and complexity to implement this approach, so that commercial products such as Intel SGX, Arm TrustZone, AWS Nitro Enclaves, Azure Confidential Computing, IBM Secure Execution, or Google Cloud Confidential Computing can be considered against those requirements.
- A formal paper with rigorous evaluation of the security model using data flows, correctness proofs, protocol fuzzers, and formal analysis.
- Prototyping using real-world credential formats, such as ISO/IEC 18013-5/23220-* mdocs, W3C Verifiable Credentials, IMS OpenBadges, or SD-JWTs.
- Evaluation of how this approach meets requirements for post-quantum cryptography.
- Drafting concise policy language that can be incorporated into model legislation or agency rulemaking to create the requirement for unlinkability where deemed appropriate.

If you have any questions or interest in participation, please get in touch. I will be turning this blog post into a paper by adding reviews of related work, explanations, and some other key sections.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We believe that instead of people signing into platforms, platforms should sign into people’s data vaults. Our products power privacy-forward verifiable digital credentials solutions for businesses and governments, with initiatives like state digital identity (including mobile driver’s licenses), digital learning and employment records, and digital permits and passes.

Monday, 22. July 2024

auth0

A Developer’s Journey with User Authentication: Species360

How Auth0 enhances this nonprofit startup’s use case

Extrimian

DWN free Community Node for Data Management


At Extrimian, we use Decentralized Web Nodes (DWNs) to enhance data management and security. Our approach ensures that sensitive information is accessible when needed, especially in critical situations where immediate and active authorization is not possible.

This blog post will explain how to use DWNs to manage access to private information, highlighting our solutions and the vital role of authorization protocols in emergency scenarios.

Emergency Access Use Case

Imagine being involved in an accident and losing consciousness. Emergency responders need instant access to your medical records to provide optimal care. With a DWN setup, this can be efficiently managed through pre-authorized access to specific health information for emergency services.
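Such a pre-authorized emergency access check could be sketched as follows. This is a hypothetical Python illustration, not Extrimian's actual API: the credential fields, the trusted-issuer list, and the `Grant` structure are all assumptions made for the sketch.

```python
import time
from dataclasses import dataclass


@dataclass
class Grant:
    grantee_role: str   # e.g. "emergency_responder"
    scopes: set         # record types the grantee may read
    issuer: str         # authority whose credentials we accept
    expires_at: float   # grants are time-bounded


# Assumption: a configured trust list of recognized authorities.
TRUSTED_ISSUERS = {"gov.health.authority"}


def authorize(credential: dict, grants: list, requested_scope: str) -> bool:
    """Check a presented credential against the user's pre-defined grants."""
    if credential["issuer"] not in TRUSTED_ISSUERS:
        return False  # credential not issued by a recognized authority
    for g in grants:
        if (g.grantee_role == credential["role"]
                and requested_scope in g.scopes
                and g.issuer == credential["issuer"]
                and time.time() < g.expires_at):
            return True
    return False
```

The key design point is that the user's consent is captured ahead of time in the grant, so at the moment of the emergency only the credential needs to be verified.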

Authorization Protocols:

- Pre-Defined Permissions: Users can pre-authorize emergency services to access their medical records stored in DWNs. This is facilitated by granting access based on credentials issued by recognized authorities, verifying the identity and purpose of the requester.
- Seamless Data Access: Once authenticated, emergency personnel can swiftly access the necessary data, ensuring they have the information required to treat you effectively, even in the absence of active authorization.

Additional Scenarios for Authorization Protocols

The versatility of DWNs extends beyond emergency scenarios, offering pre-authorized access in various other contexts:

- Health Applications: Health monitoring applications can periodically access medical data to provide personalized recommendations, thanks to pre-set authorization protocols.
- Financial Services: Financial advisors or applications can securely access financial records for tasks like tax filing or investment management without requiring repeated manual permissions.
- Travel and Safety: Travelers can authorize applications to access travel documents, vaccination records, and emergency contacts, ensuring assistance is readily available when needed.

Detailed Functionality of Extrimian’s DWN Technology

Decentralized Data Storage

DWNs provide a decentralized storage mechanism, where data is encrypted and distributed across various nodes, ensuring no single point of failure. This architecture enhances data security and availability.

Granular Access Control

Users have fine-grained control over who can access their data and under what conditions. Authorization protocols can be tailored to specific use cases, ensuring that only authenticated and authorized entities can access sensitive information.

Interoperability

Extrimian’s solutions are designed to be interoperable with various identity management systems and applications. This ensures seamless integration and functionality across different platforms and services.

Scalability

Our DWN implementation is highly scalable, capable of handling large volumes of data and numerous access requests without compromising performance or security.

Case Study: Secure Health Records Management

In our health records management use case, Extrimian leverages DWNs to securely store and manage patients’ medical information. Patients can authorize healthcare providers to access their records using credentials issued by recognized authorities. This setup ensures that healthcare providers can access the necessary information quickly and securely, enhancing patient care and operational efficiency.

Balancing Privacy and Accessibility

A core challenge in data management is balancing privacy with accessibility. At Extrimian, we ensure that all data within DWNs is encrypted and user-managed, strengthening data privacy. However, our authorization protocols allow predefined entities to access specific information under specified conditions, striking a balance between privacy and necessary access. These protocols define authentication requirements and the precise data accessible to authenticated entities, ensuring that information remains private yet available when required.

Extrimian’s Technology and Solutions

Extrimian uses DWNs to empower users with greater control over their data while enhancing the ability to respond to various critical situations. Our solutions integrate advanced authorization protocols to ensure essential information is accessible without compromising privacy or security. For more information on how Extrimian’s technology supports secure and efficient data management, visit Extrimian.io and explore our academy at Extrimian Academy.

Extrimian’s Vision for Scalable Decentralized Digital Identity Solutions

Extrimian envisions a future where decentralized digital identity (SSI) solutions are scalable and widely adopted. To achieve this, we actively create alliances with governments and private sectors, fostering mass adoption and building a critical mass of SSI technology users. A prime example of this effort is our collaboration with the Government of Buenos Aires City, leading to the development of the QuarkID protocol.

QuarkID Protocol: A Model for Interoperability

The QuarkID protocol is designed to be a foundational layer upon which other protocols can be built, enabling seamless interoperability. This protocol provides a robust framework that can be adapted by various private sectors and service industries, enhancing data security, user experience, and privacy.

Alliances for Enhanced Security and Privacy

To ensure the highest standards of security and privacy, Extrimian partners with niche experts and leading crypto technology firms. For instance, we collaborate with zkSync for implementing Zero Knowledge Proof (ZKP) systems. These alliances enable us to incorporate advanced cryptographic techniques, ensuring that users’ data remains secure and private.

Benefits for Users and Citizens

By integrating these technologies and forming strategic alliances, Extrimian delivers a decentralized digital identity system that is more secure, interoperable, and user-friendly. Citizens benefit from enhanced privacy, improved data security, and a better overall user experience.

DIF announces DWN Community Node

In the article DWNs: The Next Frontier in Decentralized Identity, the Decentralized Identity Foundation (DIF) introduces the concept of Decentralized Web Nodes (DWNs) as a pivotal development in the landscape of decentralized identity. DWNs enable secure, private data storage and exchange without relying on centralized intermediaries, thus enhancing user control and privacy. The article highlights the role of DWNs in fostering interoperability and scalability within decentralized ecosystems, underscoring their potential to revolutionize data management across various sectors. This innovation is crucial for the advancement of Self-Sovereign Identity (SSI) systems, offering a robust infrastructure for future decentralized applications.

Web3 DWN free community node now available
Source: Decentralized Identity Foundation. Website: https://blog.identity.foundation/dwn-community-node/

Extrimian and DIF Partnership

Extrimian is a proud member of the Decentralized Identity Foundation (DIF), collaborating to advance the development and adoption of decentralized identity technologies. This partnership enhances our commitment to providing cutting-edge solutions in data security and decentralized identity.

As a result of this collaboration, Extrimian and DIF are launching a virtual Self-Sovereign Identity (SSI) training called HackAlong this August. For more information, visit this link:

Extrimian and DIF
Educational Resources

To deepen your understanding of DWNs and how they are transforming data management, Extrimian Academy offers comprehensive courses. In our course on self-sovereign identity, you can find two informative videos specifically focused on DWNs:

- Introduction to Self-Sovereign Identity (SSI)
- Understanding DWNs in Practice

These videos provide a thorough overview of DWN technology, its implementation, and practical applications in data security.

Also, for further details on our secure health records management and other use cases, check out the Extrimian Use Cases.

Conclusion

Extrimian’s deployment of DWNs not only provides users with enhanced control over their data but also ensures that critical information is accessible during emergencies through robust authorization protocols. By balancing privacy and accessibility, our solutions deliver security and efficiency when it matters most. Visit Extrimian.io to learn more about how we are transforming data management with DWNs.

Document for Further Information

For more in-depth information on this topic, please visit our Extrimian Product Page that offers additional insights into our DWN technology and its applications in emergency access and data management.

The post DWN free Community Node for Data Management first appeared on Extrimian.


auth0

New Auth0 Integration for Vercel: Available Now

Get started fast with two best-in-class developer platforms: Auth0 and Vercel

Thales Group

Thales eSIM Seminar: Driving Innovation and Collaboration in the Asian Market


Thales presented the inaugural Thales eSIM Seminar, Asian edition, in Bali from the 4th to the 6th of June 2024, marking a pivotal moment for the evolution of eSIM technology in the region. This groundbreaking event united major players in connectivity and device manufacturing with leading industry analysts, creating a collaborative and inclusive environment with the aim of igniting a revolution in the adoption and use of eSIM technology.

 

During the enlightening sessions, the significance of the Asian market was a prevailing theme. Pablo Iacopino, Head of Research at GSMA, shared an insightful perspective:

“Asia is a big and important market for eSIM, and there are two ways to look at this: the consumer perspective and the supply side of things. From a customer perspective, Asia is a big market for consumers, big market for IoT. In 2025, Asia Pacific, the region, will become the biggest market for eSIM. From a supply side of things, a number of OEMs are from Asia, so Asia has the opportunity, the scale, the power to drive eSIM adoption and eSIM innovation, not only in terms of devices but also in terms of use cases.”

 

As the event unfolded, the focus on practical case studies from industry leaders and telecom operators, such as Singtel and Globe, showcased the tangible benefits of eSIM technology in meeting diverse consumer needs and preferences. Industry leaders, telecom operators, and stakeholders came together, sharing experiences and strategies aimed at refining and expanding eSIM deployment plans. 

 

Singtel launched their eSIM service for travellers back in September 2023, with a focus on the tourist segment in Singapore. They aimed to offer convenience and security to tourists, allowing them to choose telecom services without having to queue at the airport.

“They could conveniently choose and pick up the plans they wanted at any given time. We wanted to offer various options and the flexibility to keep their local number, stay connected, and have access to online check-ins and subscriptions. It was all about giving our customers convenience and a memorable experience, making them feel that Singapore offers world-class digital services,” stated Shilpa Aggarwal, Vice President of Product Marketing at Singtel.

Globe presented the prepaid market and their understanding of their customers' eco-conscious, digital-centric, and tech-driven lifestyle post-pandemic. They aim to enhance this lifestyle further through their transition to eSIM.  Givielle Florida, Prepaid Brand Head at Globe Telecom, emphasized the impact of eSIM adoption on their digital journey:

“We chose to launch the eSIM for Globe prepaid exclusively via GlobeOne app. GlobeOne is Globe's primary digital channel wherein our customers can buy promos, manage their accounts, earn and redeem their rewards, and even experience our engagement programs. What better way to actually experience the entire digital journey of Globe via eSIM as a starting point?”

Mohit Agrawal, Counterpoint's Digital Transformation & IoT expert, also emphasized the revolutionary impact of eSIM on IoT, and its potential to be a game changer in the coming years in conjunction with the new standards:

“I think SGP 31/32 will be revolutionary for IoT. With the new standards coming in, it will make it easy for customers to move from one connectivity provider to another.”

The seminar also welcomed global perspectives, with industry insights from Verizon providing invaluable experiences from the mature eSIM market in the USA. Furthermore, ZTE showcased their efforts in the Chinese market, leveraging Thales solutions to drive eSIM adoption.
 
The culmination of the event was a dynamic workshop, igniting thought-provoking discussions aimed at fostering an environment conducive to boosting eSIM adoption in the region. This workshop laid the essential groundwork for a concerted push toward a future where eSIM becomes seamlessly integrated into the diverse tapestry of the Asian market. We would like to thank all the participants for their valuable contributions and active participation in making the seminar a great success!

“The seminar's emphasis on practical applications, global perspectives, and collaborative dialogue sets it apart as a primary platform for driving the eSIM revolution in Asia,” said Guillaume Lafaix, VP of Connectivity Solutions and Embedded Products at Thales.

 


Holochain

Volla Partnership Announcement

Holochain on Mobile

You might have already heard about Volla, but today we are excited to make it official: we are announcing our partnership with Volla Systeme GmbH (Volla for short), a privacy-first mobile phone company with its own Android-based OS. Volla is preparing to ship its new Quintus phone this fall with Holochain applications pre-installed.

Relay

Built with Holochain, the secure messenger app "Relay" will offer 1-to-1 chats and group chats with secure authentication, allowing you to share photos and other files — all 100% encrypted. Users connect to each other via public key, providing a decentralized and secure system of identifiers. Relay is entering beta testing this August.

Recover

Recover is a Holochain-based application that allows users to create encrypted backups of their data without a cloud provider. It automatically and incrementally backs up Volla phones, enabling (selective) recovery in the event of theft or defect.

Reinvent 

Volla is reinventing what it means to hold a mobile phone in your hands, and we are glad to be a part of that. These are just the first Holochain apps to be built for Volla, but as partners committed to privacy-first, distributed tech, we expect that there will be many more. Holochain on mobile is a huge leap, and we are thankful to Volla for blazing this trail.

Get Involved

Volla will be doing a pre-sale of the Quintus via Kickstarter. Keep your eyes on our socials to learn more.


KuppingerCole

Eviden DirX Audit


by Nitish Deshpande

This KuppingerCole Executive View report looks at Eviden’s DirX Audit solution from its DirX portfolio. DirX Audit is an analytics and audit intelligence solution that stores historical identity data and recorded events from IAM processes. This collection of data also allows it to provide insights into access risks and to report on them through a user-friendly interface.

DHIWay

Tarento and Dhiway collaborate to Foster Trust and Certainty in the Digital Landscape


Tarento Technologies Private Ltd., an IT services company providing data and AI services, and Dhiway, a leading provider of enterprise Web 3.0 trust infrastructure, are excited to announce a partnership to collaborate on the development of scale-out applications and solutions with the CORD Blockchain framework.

As a Nordic-Indo IT Services company, Tarento has digitized business processes through low-code/no-code approaches, advanced NLP, Image and Speech solutions, and Deep Machine Learning.

Dhiway and Tarento have signed a Memorandum of Understanding (MoU) to collaboratively explore the design, development and production of various applications and services, including Dhiway’s credentialing platform MARK Studio and the trust infrastructure enabled by CORD Blockchain. The joint effort will leverage the modular trust infrastructure from Dhiway to provide verifiability of data with continuous assurance.

This partnership will transform how data pipelines are managed; data exchanges are set up and utilized in many use cases for improved business efficiency and growth.

Mohit Agarwal, Vice President – Digital at Tarento, stated, “This is a strategically important decision for us. We already have great work going on in the govtech and startuptech spaces, in AI engineering, product development, and automotive/IoT. With this partnership, we want to expand our open-source, open-standards technology services footprint to co-create meaningful blockchain rollouts for digital public infrastructure and country-scale impact.”

K P Pradeep, CEO at Dhiway, emphasized that “The CORD Blockchain framework has been designed to offer authentic data streams at scale using the composable trust infrastructure. This partnership enables significant opportunities for both companies to extend the impact of Web 3.0 technologies and especially distributed ledger technologies.” 


About Tarento Technologies Private Limited

Tarento is a Nordic-Indian technology services company with an unflinching desire to be the trusted global partner for startups, enterprises, government organizations and foundations/not-for-profits. We are proudly associated with architecting, building and operating some of India’s largest digital public infrastructure initiatives, government capacity building, edtech, and automotive digital platforms.


About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.



The post Tarento and Dhiway collaborate to Foster Trust and Certainty in the Digital Landscape appeared first on Dhiway.


KuppingerCole

Trust in an AI Interconnected World


by Scott David

Trust is an emotional state and belief held by human beings that is built upon a sense of reliability and predictability regarding future interactions.  The concept of trust is broadly applied to cover relationships among people, or between people and organizations.  The concept of a legal trust extends and formalizes the reliability of future interactions to create legally enforceable fiduciary obligations to elevate the subjective emotions and beliefs to become trustworthy, objective, reliable and actionable for future relationships. 

The concept of trust is not, however, usually applied to relationships AMONG organizations.  It seems naive to assert that one company (or any other purely legal person) trusts another.  At that point, the concept of reliability and predictability is more usually characterized as risk, rather than trust.   Organizations have developed myriad metrics for assessing risk across business, operating, legal, technical and social (BOLTS) domains as surrogate signals in the absence of human trust. 

Organizations do not have qualia, emotions, or beliefs, and therefore cannot be said to trust something.  However, as noted above, the reverse is not true, i.e., humans can trust organizations, and that is the source of brand loyalty (companies) and patriotism (countries), etc.

Trust is built on consistency of behaviors through time and space, and is encoded in signals associated with consistent behaviors.  With the advent of networked interaction and information systems (the Internet), the signals and behaviors upon which trust is built became mediated by multiple unseen layers, attenuating trust.  The caption of a well-known cartoon from the early Internet years: “On the Internet no one knows you’re a dog,” speaks to the challenges of trust in such unfamiliar, intangible domains.

With the advent of myriad systems and applications of so-called Artificial Intelligence, the signals, behaviors, and interactions online are rendered even more remote and unfamiliar, which further challenges human trust.  How can we trust interactions with AI (and mediated by AI) if we don’t even know what to expect of it?  The AI black box problem is not just confined to internal AI processing steps, it is also present in online interactions (and the information associated with such interactions) where AI is involved.  The advent of “agentic” AI systems, i.e., multi-step and AI P2P interactions, will create vast interaction complexities that will, in effect, enclose all online interactions in a black box rendering control to be stochastic at best, and illusory at worst.

In fact, the concept of trust with AI is a trap.  AI is a form of computational intelligence that processes human (mostly English) text purely computationally.  AI does not, at present, have any sense or understanding of the content or concepts that it is processing.  Putting aside the remarkable phenomenon that computational intelligence can produce outputs that reflect content and style familiar to humans, AI merely processes, it does not understand. 

In fact, AI’s ability to computationally derive such subtle, textured, and human-like patterns from text alone, supports the notion that a significant portion of human cognition (thinking) takes place in language (and material culture) itself, and not in the wet-ware organ of the brain.  The brain is just tuned to the mind that actually resides in language.  From this perspective, AI is a computational mind reader when it processes human text.  That possibility is at the same time creepy and beautiful, for it suggests a future hybridization of humans and AI systems into virtual chimera through a process that is most closely associated with symbiogenesis (from which eukaryotic mitochondria and chloroplasts are derived), but in an intangible form that might be called “sym-info-genesis.”

To put a finer point on this, human survival depended in part on overfitting perceptions of risk vectors from the environment.  Individual humans who perceived a lion hiding in the tall grass tended to live longer than humans that did not perceive the lion, even if the lion was not always there.  This quality of pareidolia (pattern detection in the environment) is responsible for humans perceiving AI output as presenting readable text and capturing author styles, etc. 

It is frequently said that AI hallucinates.  It is, in fact, the humans that are hallucinating AI outputs, not just the AI systems themselves.  In fact, human perception of meaning and content from AI system outputs is akin to predators (mis)perceiving that the eyespots on moth wings are the eyes of a much larger animal.  It is hallucination prompted by mimicry prompted by evolution. 

In the case of AI, the source of the mimicry is not evolution but computation AND human preference in selecting more useful forms of output.  Of course, when humans marvel at the efficacy and beauty of AI outputs, they are also creating a (virtual) fitness landscape, and selecting for those systems that are most evolved for survival in that (human interaction/information) landscape.  We were not around at the time that fish first crawled onto land, but humans are privileged to be able to watch the evolution of a new (non-physical) living form as AI evolves its way into the human trusted interaction landscape.

What are the implications for trust in the future internet, where computational intelligence, i.e., AI, can mimic all sorts of content in ways that can benefit or harm interacting parties? At cyberevolution in Frankfurt this December, we will explore the implications of these and related phenomena, with the intention of understanding the dynamics at the crossroads of trust and risk. We hope that you can join us.

Sunday, 21. July 2024

KuppingerCole

AI and Digital Trust: Ensuring Fairness and Transparency


In this episode of the KuppingerCole Analyst Chat, Matthias Reinwarth talks to Marina Iantorno, Research Analyst at KuppingerCole Analysts. They explore the concept of digital trust in our AI-driven, interconnected world. The discussion explores the definition and importance of digital trust, the current landscape of AI systems, and examples of successful and failed attempts to build trust. Marina also breaks down key tenets crucial for fostering digital trust, including transparency, data privacy, security, accountability, and more. The episode provides actionable strategies for implementing these tenets and highlights tools and technologies that support digital trust.



Saturday, 20. July 2024

Safle Wallet

Introducing Safle LENS

Weekly Safle Update! 🚀

We’re excited to share the latest progress and milestones achieved with Safle this week. Here’s the update on this week’s journey:

🚨 Exciting Release Alert: SafleID Documentation is Live!

Our comprehensive SafleID Documentation has finally been released. Delve into the detailed intricacies of SafleID and discover the exceptional value it offers.

Explore Now 👉

🔗SafleID Documentation

🛠️ Portfolio Website Overhaul

Introducing Safle LENS, our new portfolio viewer. Join us on this exciting journey and look forward to a sleek, user-friendly experience that’s truly exceptional.

Here’s a sneak peek at the design 👇

Stay tuned and keep your eyes on the stars—Safle LENS is coming soon! 🌟

Sign-Up Flow Development

The new sign-up flow is nearly complete and we’re gearing up for testing. Soon, onboarding will be smoother and faster for all new Saflenauts, Sentinels, and users.
Be among the first to try it out by signing up for our beta release program!

🔗Click here

🤝🏻 New Partnership with Coinshift!

We’re happy to announce our new partnership with Coinshift for Safle’s treasury management. Stay tuned as we will share more about it in the coming days.

🔗Follow us

💬 Join us on Discord

Join our Discord channel to connect with the team, share your thoughts, ask questions, and stay updated on all things Safle. We’re building in public, and our team is live every day on Discord, come say Hey at Safle Build in Public channel!

🔗Join us here

🚀 We are Hiring!!

Got any DevOps Ninjas, Kickass Growth Marketers, Detail-Oriented QA Experts, or Innovative Blockchain Engineers in your circle? Send them our way, and we’ll take it from there. Check out our openings and join Safle’s journey.

🔗Here

Thank you for being an integral part of our journey. Together, we’re reaching for the stars!

Stay stellar,
The Safle Team

Download the Safle App Now!

Experience the power of Safle at your fingertips 🚀

🔗SafleWallet

Friday, 19. July 2024

This week in identity

E56 - Emergency Episode Discussing the Global Crowdstrike Issue

Simon and David convene for a special episode to discuss the ongoing global IT outages caused by a Crowdstrike update. Note this was released Friday 19th July 9am PST / 5pm BST



Elliptic

Complying with MiCA’s stablecoin requirements using Ecosystem Monitoring

On June 30, new rules for stablecoin issuers came into effect across the European Union under the bloc’s Markets in Cryptoasset (MiCA) regulation. Stablecoin issuers must now obtain approval from EU member state supervisory authorities and meet a range of regulatory requirements prior to offering their tokens to consumers within the EU. 



Tokeny Solutions

The Journey to Becoming the Leading Onchain Finance Operating System


Product Focus

The Journey to Becoming the Leading Onchain Finance Operating System

This content is taken from the monthly Product Focus newsletter in July 2024.

We are thrilled to share the exciting evolution of Tokeny. After years of continuous development and collaboration with asset owners and leading financial institutions across major financial hubs, our platforms and systems have significantly advanced. Today, we are proud to offer a robust and competitive DLT-based infrastructure that encompasses the entire spectrum of financial services.

As we reach this milestone, we are now focused on helping financial institutions transition to onchain finance. This means providing them with the necessary tools to conduct all their operations on a shared IT infrastructure powered by blockchain technologies.

A Complete Onchain Finance Operating System
Financial institutions have been investing significant resources for years to explore and understand the potential of blockchain technology and its application to their business use cases. Most of the time, they were not successful. They need a fully integrated onchain operating system, allowing them to quickly experiment and launch real-world applications in just a matter of days, with a proven ecosystem and use cases. Shaped by the demands of hundreds of real-life tokenization projects, our products provide exactly that.

Three Products for Businesses of All Kinds
To that end, we have developed and packaged our offering into three layers to serve businesses of all kinds:

- T-REX Platform: The no-code, white-label solution that enables you to quickly launch your digital asset marketplace and compliantly manage tokenized assets.
- T-REX Engine: A set of onchain finance APIs that allow you to tailor the solutions for different business use cases and integrate them into your existing systems and applications.
- T-REX Protocol: An advanced implementation of the open-source ERC-3643 token standard for ecosystem builders to build on and enrich the ecosystem.

With these tools, you can tokenize any asset, on any EVM blockchain, tailor compliance setups to meet regulations in any jurisdiction, create custom workflows, manage tokens, serve investors, and authorize agents for corporate actions effortlessly.
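To make the compliance idea behind ERC-3643 concrete, here is a rough Python sketch of what distinguishes a permissioned token from a plain fungible token. The real standard is implemented as Solidity smart contracts, and the class and method names below are hypothetical simplifications: transfers only succeed when the recipient is verified in an identity registry.

```python
# Conceptual sketch of ERC-3643's core idea (hypothetical names; the real
# standard is a set of Solidity contracts): transfers succeed only if the
# recipient's identity is verified in an identity registry.

class IdentityRegistry:
    def __init__(self):
        self._verified: set[str] = set()

    def register(self, address: str) -> None:
        self._verified.add(address)

    def is_verified(self, address: str) -> bool:
        return address in self._verified

class PermissionedToken:
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances: dict[str, int] = {}

    def mint(self, to: str, amount: int) -> None:
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, to: str, amount: int) -> bool:
        # The eligibility check that distinguishes this from a plain ERC-20:
        if not self.registry.is_verified(to):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

registry = IdentityRegistry()
registry.register("investor-a")
token = PermissionedToken(registry)
token.mint("investor-a", 100)
blocked = token.transfer("investor-a", "unknown-wallet", 10)  # recipient unverified
registry.register("investor-b")
allowed = token.transfer("investor-a", "investor-b", 10)
```

In the actual protocol, the registry lookup resolves onchain identities and the compliance module can encode per-jurisdiction rules; the sketch only shows the gating pattern.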

Establishing An Incomparable Ecosystem
Another issue is that each stakeholder may use different service providers, such as custodian wallets. The role of the operating system is to ensure that regardless of stakeholders’ preferences, everything works seamlessly.

By partnering with over 200 service providers, we have formed an incomparable ecosystem to overcome these challenges. Our unique value proposition emerged as an onchain finance enabler for any type of business from large asset managers, fund administrators, distributors, and investment banks, to innovative entrepreneurs.

Thrive in the Onchain Era
Onchain finance represents a monumental shift in capital markets, where real-time operations and transactions have long lagged behind other sectors. Imagine a future where acquiring assets is as intuitive as shopping on Amazon, and transferring assets is as effortless as sending money via PayPal, even when you are a large and regulated financial institution.

Asset management will never be the same again. Onchain assets become smart and easy to manage. Investors receive instant, interactive, and personalized services. This marks the dawn of a truly modern era for finance. Equipped with cutting-edge tools, deep expertise, and a complete ecosystem, we are here to propel you to the forefront of onchain finance.

Xavi Aznal, Head of Product

Subscribe Newsletter

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs

- 56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead (23 August 2024)
- The Journey to Becoming the Leading Onchain Finance Operating System (19 July 2024)
- Streamline On-chain Compliance: Configure and Customize Anytime (3 June 2024)
- Multi-Chain Tokenization Made Simple (3 May 2024)
- Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
- Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
- Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
- ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)
- Introducing Multi-Party Approval for On-chain Agreements (5 December 2023)
- The Unified Investor App is Coming… (31 October 2023)

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us


The post The Journey to Becoming the Leading Onchain Finance Operating System appeared first on Tokeny.


PingTalk

Transformative Approaches to Reduce Identity Fraud in Banking

New technologies and architectures from Ping Identity are now available to counter traditional fraud vectors and frustrate generative and adversarial AI.

Banking fraud becomes costlier each year, and the misuse of generative and adversarial AI adds new attack vectors of a sophistication never experienced before. Identity fraud is a major contributor to the rise in overall bank fraud, driven by many factors including an explosion in identity theft: experts believe there is a new victim of identity theft every 22 seconds, and total fraud and identity theft losses were up 47% from the previous year to $10.2 billion according to the Federal Trade Commission (FTC). Meanwhile, the Financial Crimes Enforcement Network (FinCEN) released a Financial Trend Analysis in January 2024 revealing that approximately 1.6 million, or 42%, of around 3.8 million total Bank Secrecy Act (BSA) reports, equivalent to $212 billion in suspicious activity, were related to identity.

 

These government agencies are sounding the alarm because banks and other financial institutions are increasingly challenged by sophisticated, motivated cybercriminals who are constantly finding new and creative ways to commit fraud. At the same time, customer demands mean that financial institutions are under significant pressure to provide Open Banking APIs and other new federated connections with business partners, despite the fact that this significantly increases their attack surface. 

 

Fortunately, new technologies and architectures are now available that can help banks counter the traditional attacks and future-proof against new and enhanced AI-based attacks.

 

Compromised Identity Is Central to Banking Fraud

Identity crimes often precede the many types of fraud common in banking. Whether fraudsters are aiming to open new accounts or apply for loans or new credit cards under a stolen or synthetic identity, or are seeking to gain access to existing accounts in order to make fraudulent transfers or harvest sensitive information, they must commit identity fraud first.

 

It is unsurprising, then, that the cost of identity fraud in banking, as well as the volume of fraud cases related to identity, continues to go up. Andrea Gacki, Director of FinCEN, revealed in June 2024 some preliminary results of an early assessment of the Suspicious Activity Reports (SARs) from 2022 and 2023. In just two years, the percentage of the 4.7 million reported SARs tied to some impersonation, circumvention, or compromise of identity has jumped from 42% (2021 assessment) to 75%. Director Gacki said, “Based on initial indications, by 2023, identity-related SARs accounted for around half of value and almost three quarters of volume.”

 

AI Has Created New Threat Vectors

Developments in artificial intelligence have been a boon to fraudsters, who can now use generative AI to commit fraud more effectively and at scale. As just one example of how this might play out, many European banks and regulators have instituted remote video interviews as a requirement for opening a bank account. However, what our eyes see and our ears hear can no longer be relied upon, thanks to generative and other AI technologies being exploited by adversaries. Rapid-implementation tools, now available as layers on top of the core AI technology, enable video and audio deepfakes to be created and injected into a digital interaction with little effort.

 

Fraud departments already struggle to keep up with the number of cases that need their attention, and AI is likely to make this problem much worse. Ping recently surveyed 700 IT decision makers from around the world about the topics of AI, fraud, and decentralized identity, and found that only 52% of respondents felt fully confident that they could detect a deepfake of their CEO. Meanwhile, AI emerged as the top area of significant concern among the professionals surveyed, and 54% of organizations admitted to being extremely concerned that AI technology would increase identity fraud.

Digital and Open Banking Increases Attack Surface

Digital and online banking continues to increase at a rapid pace, with customer demand to execute routine financial transactions online driving adoption. 81% of surveyed users in the US say they have linked their bank account to third parties online. Regulation demanding Open Banking, intended to prevent customers from being locked into one bank and to enable them to move between banks, has added further pressure.

 

Enabling access using traditional methods like server-side APIs and federation (such as OIDC) does not lend itself to increasing security. Every time account access APIs are published for consumption by third parties or federated integrations are created between the bank and a third party, the attack surface of the bank increases, making it more vulnerable and statistically more likely to experience an attack that must be mitigated. As sophistication increases with generative and adversarial AI, securing these connections and mitigating attacks will become increasingly expensive with a higher probability of failure to mitigate.

 

New Technologies and Architectures Open Up New Protective Fronts to Fight Fraud

Fortunately, new technologies and architectures are now available that can help banks counter the traditional attacks and future-proof for the fast-approaching AI-based attacks. One such solution is the PingOne Neo product suite, which includes identity verification with liveness and data injection detection (deep fake protection), verifiable credentials, and decentralized identity and integration.

 

To see how these technologies can help, let’s examine some of the functional areas requiring protection in banking and how these new technologies can help.

Thursday, 18. July 2024

Extrimian

Essential Workflows in an SSI Ecosystem

A Focus on the Trust Triangle for Digital Identity

In a previous article, Functional Analysis for Implementing Self-Sovereign Identity (SSI) in Your Business, we discussed how decentralized identity is transforming digital identity management. It offers users complete control over their personal data and emphasizes the importance of a detailed functional analysis for successful implementation. In this article, we will focus on the main use cases within an SSI ecosystem, exploring the interactions and roles between the key players.

What is the SSI Trust Triangle?

The trust triangle is a fundamental concept describing the trust relationship among three actors in the SSI ecosystem: the issuer, the holder, and the verifier. This trust triangle ensures that digital credentials are issued, managed, and verified securely and reliably.

Roles and Responsibilities:

- Issuer: The entity that issues digital credentials based on certain attributes or information of the holder and digitally signs them. Examples include universities issuing digital diplomas, governments issuing digital IDs, or companies issuing employment certificates.
- Holder: The person or entity that receives and possesses the digital credential. The holder stores these credentials in their identity wallet and presents them when needed. They have full control over who can view and verify their credentials, ensuring privacy and control over their identity.
- Verifier: The entity that verifies the authenticity and validity of the digital credential presented by the holder. They ensure that the credential was issued by a trusted issuer and that the information contained in the credential is valid. Examples include employers verifying employment certificates, airlines verifying digital passports, or financial institutions verifying customer information.

Credential Issuance

Credential issuance involves the issuer and the holder and can be initiated in two ways:

1. User-Initiated Request

The holder initiates the action by requesting the issuer to generate a credential. This process can be done through an application provided by the issuer. Once the request is approved, the credential is sent to the user’s identity wallet.

2. Automatic Issuance

Automatic issuance occurs without an explicit request from the user. It happens when a specific action within a system triggers the issuance of a credential, which is then sent automatically to the holder’s identity wallet without requiring additional confirmation.

Credential Reception

The holder receives credentials either through mobile applications or web applications.

Mobile Identity Wallet

If the holder initiates the credential issuance request, the issuer’s site or application generates a credential embedded in a QR code or a deeplink.

- QR Code: The holder scans the QR code with their phone’s camera or the wallet’s integrated camera. This initiates the Wallet and Credential Interactions (WACI) flow, involving a message exchange with the SSI backend. The user accepts and saves the generated credential in their wallet.
- Deeplink: The holder accesses the deeplink received from the issuer, automatically initiating the WACI flow. The user accepts and saves the generated credential in their wallet.

Web Identity Wallet

For web wallets, the reception can be automatic. The generated credential appears directly in the wallet without needing user confirmation. This process also involves the WACI protocol.
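The QR code and deeplink mechanics above come down to packing a credential offer into a URL the wallet can decode. Here is a minimal Python sketch; the URL scheme and payload fields are hypothetical simplifications, since actual WACI messages follow DIDComm out-of-band formats.

```python
import base64
import json

# Hypothetical sketch: pack a credential offer into a deeplink. The scheme
# and payload shape are illustrative, not the actual WACI/DIDComm format.

def make_offer_deeplink(offer: dict, scheme: str = "credential-offer://") -> str:
    raw = json.dumps(offer, sort_keys=True).encode()
    token = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
    return f"{scheme}?offer={token}"

def parse_offer_deeplink(link: str) -> dict:
    token = link.split("offer=", 1)[1]
    padded = token + "=" * (-len(token) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

offer = {"credential_type": "UniversityDegree", "issuer": "did:example:university"}
link = make_offer_deeplink(offer)
```

The same encoded payload can equally be rendered as a QR code for the mobile flow; scanning it and opening the deeplink are two routes to the identical message exchange.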

Credential Presentation and Verification

Credential presentation involves both the holder and the verifier. The verifier can be a web or mobile application, adapting to the needs of the user and the verification context.

Presentation Methods:

1. QR Code Scan

The verifier presents a QR code that the user scans with their device. The user selects the credential they wish to present and can choose to use Selective Disclosure, showing only the necessary data from the credential or presenting the entire credential.

2. Automatic Presentation

The user can select the credential they wish to present from their web wallet and choose the verifier to whom they wish to present it.
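Selective Disclosure, mentioned above, can be sketched with salted claim hashes, loosely in the style of SD-JWT. This is a simplified illustration, not the production format: the issuer signs only the digests, so the holder can later reveal any subset of claims without invalidating the signature.

```python
import hashlib
import json
import secrets

# Simplified SD-JWT-style selective disclosure (illustrative only): each
# claim is salted and hashed; the issuer's signature would cover the sorted
# digest list, while the holder keeps the disclosures and reveals a subset.

def blind_claims(claims: dict) -> tuple[dict, list]:
    disclosures, digests = {}, []
    for name, value in claims.items():
        salt = secrets.token_hex(8)                     # fresh salt per claim
        disclosure = json.dumps([salt, name, value])
        disclosures[name] = disclosure
        digests.append(hashlib.sha256(disclosure.encode()).hexdigest())
    return disclosures, sorted(digests)

def verify_disclosure(disclosure: str, signed_digests: list) -> tuple:
    digest = hashlib.sha256(disclosure.encode()).hexdigest()
    if digest not in signed_digests:
        raise ValueError("disclosure not covered by the issuer's signature")
    _salt, name, value = json.loads(disclosure)
    return name, value

disclosures, digests = blind_claims({"name": "Alice", "birthdate": "1990-01-01"})
# Holder reveals only 'name'; 'birthdate' stays hidden from the verifier.
revealed = verify_disclosure(disclosures["name"], digests)
```

The salt prevents the verifier from brute-forcing hidden claims by hashing guessed values.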

Verifier Validations

The verifier validates the credential, ensuring it is current, valid, and issued by an authorized issuer. After validating the credential, specific business rules of the verifier are applied. The validation results are shown to both the user and the verifier.
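The issue-hold-verify loop described in this article can be sketched in a few lines of Python. Real SSI systems use public-key signatures (e.g. Ed25519) bound to DIDs; HMAC stands in here so the sketch stays dependency-free, and the DIDs and key material are hypothetical.

```python
import hashlib
import hmac
import json

# The verifier's list of trusted issuers (illustrative key material).
ISSUER_KEYS = {"did:example:university": b"issuer-secret-key"}

def issue_credential(issuer_did: str, subject: str, claims: dict) -> dict:
    payload = {"issuer": issuer_did, "subject": subject, "claims": claims}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEYS[issuer_did], body, hashlib.sha256).hexdigest()
    return {**payload, "proof": sig}

def verify_credential(credential: dict) -> bool:
    issuer = credential["issuer"]
    if issuer not in ISSUER_KEYS:          # not a trusted issuer
        return False
    body = json.dumps(
        {k: credential[k] for k in ("issuer", "subject", "claims")},
        sort_keys=True,
    ).encode()
    expected = hmac.new(ISSUER_KEYS[issuer], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue_credential("did:example:university", "did:example:alice",
                      {"degree": "BSc Computer Science"})
tampered = {**vc, "claims": {"degree": "PhD"}}  # fails verification
```

The two checks in `verify_credential` mirror the verifier's duties in the trust triangle: the credential must come from a trusted issuer, and its contents must be untampered.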

Conclusion

These use cases highlight the flexibility and control that decentralized identity offers, allowing users to manage their credentials securely and efficiently. Understanding these flows and the information exchange between SSI ecosystem actors is crucial to appreciating the benefits and innovation brought by this system.

For more information on Self-Sovereign Identity (SSI), use cases, applications, and industries that can implement decentralized identity systems, visit:

- Self-Sovereign Identity (SSI) Use Cases
- Decentralized Identity Foundation

The post Essential Workflows in an SSI Ecosystem first appeared on Extrimian.


liminal (was OWI)

Balancing UX and Security in Customer Authentication

In the rapidly transforming digital landscape, ensuring secure and seamless customer authentication has become a critical priority for businesses across various sectors. Customer authentication is essential for optimizing user experience (UX) and security. Companies striving to implement robust customer authentication encounter significant challenges that can impact their bottom line. Outdated account recovery methods, persistent reliance on passwords, and the growing threat of phishing and fraud present considerable obstacles. However, integrating advanced customer authentication solutions offers promising avenues for mitigating these issues and achieving substantial cost savings.

The Importance of Customer Authentication

Customer authentication solutions are integral to regulating user access to online applications, digital resources, and transaction flows. Historically dependent on passwords and knowledge-based methods, contemporary solutions now employ various passive and active authentication techniques to confirm identities and secure login attempts. Businesses invest in these solutions to protect against account takeover attacks and to ensure that only authorized users can access their digital platforms. Furthermore, regulatory influences like the second Payment Services Directive for online transactions drive the adoption of these advanced authentication methods.

Challenges in Implementing Customer Authentication

Despite the availability of sophisticated authentication technology, businesses continue to rely on legacy solutions. Though familiar to consumers, these solutions create challenges in implementation and efficacy. Traditional account recovery methods remain costly, friction-filled, and vulnerable to phishing attacks. According to recent surveys, 59% of authentication practitioners are dissatisfied with their current account recovery capabilities, which heavily rely on passwords. This dissatisfaction leads to higher operational costs, increased call center volumes, and elevated fraud risks. Additionally, educational gaps and legacy systems hinder the widespread adoption of passwordless solutions, with 41% of businesses acknowledging these barriers.

Another significant issue is balancing user experience and security in authentication flows. While 49% of businesses prioritize enabling convenient user experiences, 51% place greater emphasis on preventing unauthorized access. This delicate balance often results in trade-offs that frustrate users and compromise security.

The complexity of the customer authentication landscape further complicates the situation. The market is crowded with over 50 companies offering various solutions, from global tech giants like Google and Microsoft to specialized vendors like 1Kosmos and Curity. Each vendor presents unique capabilities and approaches, making it challenging for businesses to select the most suitable solution. For instance, while some companies provide end-to-end authentication platforms, others focus on niche areas like passkeys, OTPs, or biometrics. 

Moreover, manual risk decisioning, idiosyncratic authorization methods, and unprotected one-time passwords (OTPs) prevent current authentication solutions from realizing their full potential. Although 93% of practitioners seek AI-based adaptive or continuous authentication capabilities, few vendors leverage AI and machine learning to produce real-time automated recommendations based on context and risk levels. Similarly, without standard protocols and frameworks, deploying customized access control policies is complex, limiting the effectiveness of fine-grain authorization capabilities.

The Cost of Inadequate Customer Authentication

Businesses also struggle with the high costs associated with not successfully authenticating customers. The average cost per successful phishing attack is $5,285, and the cost per successful telephone fraud attack is $792. Additionally, 29% of call center volumes relate to account recovery, contributing to significant operational expenses. These factors underscore the need for more robust and efficient authentication solutions.

Advanced Customer Authentication: A Path Forward

Leading customer authentication solutions offer a path forward by addressing these challenges and providing substantial benefits. By adopting advanced authentication methods like FIDO2 passkeys, standardized protocols such as OAuth 2.0 and OpenID Connect, and expanding native capabilities, businesses can enhance security and user experience.

FIDO2 Passkeys: Emerging as a phishing-resistant replacement for passwords and OTPs, FIDO2 Passkeys address account recovery challenges by ensuring the authentication process remains entirely on the user’s device. This hardware-based authentication method uses strong cryptography and biometrics, reducing the need for easily compromised static credentials.
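To illustrate the server-side half of a FIDO2/WebAuthn registration, here is a minimal sketch of the relying party generating creation options with a fresh random challenge. The field names follow the WebAuthn specification; a real deployment would use a dedicated library (such as python-fido2) and verify the authenticator's attestation response.

```python
import base64
import secrets

# Sketch of PublicKeyCredentialCreationOptions as a relying party might
# build them (field names per the WebAuthn spec; values illustrative).

def make_registration_options(rp_id: str, user_id: bytes, user_name: str) -> dict:
    challenge = secrets.token_bytes(32)  # single-use, remembered server-side
    return {
        "challenge": base64.urlsafe_b64encode(challenge).decode(),
        "rp": {"id": rp_id, "name": rp_id},
        "user": {
            "id": base64.urlsafe_b64encode(user_id).decode(),
            "name": user_name,
            "displayName": user_name,
        },
        # -7 is the COSE algorithm identifier for ES256 (ECDSA with SHA-256)
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],
        "authenticatorSelection": {"userVerification": "preferred"},
        "timeout": 60000,
    }

opts = make_registration_options("bank.example", b"user-123", "alice")
```

Because the resulting key pair never leaves the user's device and is scoped to the relying party's origin, a phishing site cannot replay the credential, which is what makes passkeys phishing-resistant.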

Standardized Protocols: Standardizing authentication and authorization protocols through OAuth 2.0 and OpenID Connect promotes interoperability among customer authentication solutions. OAuth 2.0 provides secure delegated access, allowing third-party services to request resources on behalf of users without exposing credentials. OpenID Connect adds an authentication layer to verify user identity, ensuring secure and compatible integrations between service providers, identity providers, and authentication providers.
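The identity layer OpenID Connect adds comes down to a set of required checks on the ID token. The sketch below shows the claim validation the spec requires after signature verification (which needs the provider's JWKS and a JOSE library, omitted here); the claim names `iss`, `aud`, and `exp` are from the OIDC specification, while the token built at the end is a toy unsigned example.

```python
import base64
import json
import time

# Minimal ID-token claim checks per OpenID Connect Core, applied AFTER the
# JWT signature has been verified (not shown; requires the provider's JWKS).

def decode_jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def validate_id_token_claims(claims: dict, issuer: str, client_id: str,
                             now=None) -> bool:
    now = time.time() if now is None else now
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return (claims.get("iss") == issuer      # token came from our provider
            and client_id in audiences       # token was minted for our app
            and claims.get("exp", 0) > now)  # token has not expired

# Toy unsigned token, just to exercise the helpers.
payload = {"iss": "https://idp.example", "aud": "bank-app", "exp": 2_000_000_000}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode()
claims = decode_jwt_payload(f"header.{body}.signature")
```

Skipping any one of these checks (issuer, audience, expiry) is a classic integration bug that turns a federated login into an account-takeover vector.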

Expanding Native Capabilities: Vendors are broadening their capabilities beyond managing daily customer access by integrating fraud detection, prevention, and identity verification into their platforms. Emerging orchestration capabilities enable businesses to customize authentication flows with pre-integrated partners, starting with basic MFA and adding third-party authenticators as needed to optimize user flows.

Cost Savings and Efficiency: Businesses adopting leading customer authentication solutions can achieve significant cost savings. For instance, prevention of account resets directed to call centers can reduce call center volumes by 38%. Intuitive passwordless authentication and self-service account recovery can decrease call center labor requirements by 60%. Additionally, these solutions can reduce customer churn by 13% and successful phishing attacks by 12%.
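As a back-of-envelope illustration of how the percentages quoted above combine, here is a small calculator. The reduction figures come from this article; the baseline inputs (annual call volume, cost per call) are hypothetical placeholders, not survey data.

```python
# Illustrative savings model: the 29% account-recovery share and 38% volume
# reduction are quoted above; call volume and cost per call are made up.

def annual_recovery_savings(total_calls: int, cost_per_call: float,
                            recovery_share: float = 0.29,
                            volume_reduction: float = 0.38) -> float:
    recovery_calls = total_calls * recovery_share       # recovery-related calls
    avoided_calls = recovery_calls * volume_reduction   # calls eliminated
    return avoided_calls * cost_per_call

# e.g. 1,000,000 calls per year at a hypothetical $8 per call
savings = annual_recovery_savings(1_000_000, 8.0)  # roughly $880k avoided
```

Even with conservative inputs, the account-recovery slice of call-center cost dominates the savings calculation, which is why passwordless recovery features figure so prominently in vendor ROI claims.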

Integrating advanced customer authentication solutions enhances security and user experience and drives substantial cost savings and operational efficiencies. As businesses navigate the complexities of the authentication landscape, embracing these innovative solutions will be crucial for staying ahead of evolving threats and maintaining competitive advantage.

Related Content:

- Customer Authentication Market and Buyer’s Guide
- Link Index for Customer Authentication
- Account Takeover Market and Buyer’s Guide

The post Balancing UX and Security in Customer Authentication appeared first on Liminal.co.


Spruce Systems

Meet the SpruceID Team: Scotty Matthewman

Get to know Scotty Matthewman, Senior Designer here at SpruceID.

Name: Scotty Matthewman
Team: Design
Based in: New York City

About Scotty

Applying my industrial design background to tech products, I have worked in the startup and agency worlds and, most recently, in corporate innovation. After finding my way into Blockchain tech and innovative tech in general, decentralized and digital identity became a very interesting area to explore.

I was excited to join SpruceID because I’d have the opportunity to design things that have never been designed before, and both the company and industry's values align with the impact I’d like to have: improving accessibility and inclusion through experience optimization.

Can you tell us about your role at SpruceID?

I am the Senior Designer on the team, so I work on product design, website design, brand and marketing materials, demo prototypes and videos, and communication artifacts. In addition to design work, my role involves product strategy discussions and user research.

What advice would you give to a designer who is early in their career?

I would say that being genuinely curious is necessary. You have to seek out how things work or why things are the way they are if you want to make educated decisions in your designs; otherwise, you’re just shooting in the dark.

I think the life of a professional designer is a lot different than that of a design student. Nothing is as simple as it is when you receive an assignment, so you will always have to go out and proactively understand the constraints, capabilities, business value, etc., of anything you’re working on. 

Additionally, it’s way easier to spend a lot of time on something when it’s interesting to you, so find what’s interesting, and you’re more likely to find a great path. In my opinion, every other skill in design stems from curiosity and wanting to learn.

What are you currently learning, or what do you hope to learn?

The obvious answer is all the tech behind our product. I’m not an engineer, but understanding the moving parts, at least at a high level, helps me make more educated decisions in the designs.

What has been the most memorable moment for you at SpruceID so far?

Our team offsite in Rio de Janeiro, Brazil was an awesome experience. I had just recently joined SpruceID, and because we had the majority of the team in one place, I got to know many people early on. I think in-person experiences like this can help build rapport and chemistry in a team, which can enable them to work quickly and effectively.

How do you define success in your role, and how do you measure it?

Success in my role is taking abstract problems and delivering the most intuitive, effective solutions.

These may not be official measurements, but it’s a big win when someone says, “Oh! Yeah, that would work!” because you bring a new, creative way of designing something.

You feel like you solved the puzzle. It’s very gratifying and promising when your chosen solution feels like the obvious choice.

What is your favorite part about working at SpruceID?

I have really enjoyed being on a team made up of very smart people who are all trying to make ambitious strides. It is clear how great everyone is at what they do, so I am humbled to be included in this group.

Fun Facts

What do you enjoy doing in your free time?: I like to play basketball, walk through NYC, spend time with friends, go to comedy shows, surf when I can, and take singing classes, which have been such a fun part of living in New York.

If you could be any tree, what tree would you be and why?: I would be a maple tree. I used to make furniture, and I always loved using maple. The color and grain are so pretty, and it gives off a clean, Nordic and Japanese vibe.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.

Want to work with us? Check out our open roles here.


Tokeny Solutions

MOCHAX and Tokeny Partner to Provide Unprecedented Values to Equity Investment through RWA Tokenization

The post MOCHAX and Tokeny Partner to Provide Unprecedented Values to Equity Investment through RWA Tokenization appeared first on Tokeny.

Luxembourg, 18th July – MOCHAX, a leading digital asset management firm, is launching a Real World Asset (RWA) tokenization project in collaboration with Tokeny, the pioneering onchain finance operating system for tokenized securities. This initiative tokenizes the equity of the firm, bringing new levels of liquidity, accessibility, and efficiency to the equity market.

According to Pitchbook, private equity has historically outperformed public markets in return comparisons over periods ranging from 5 to 20 years. However, the traditional equity market is inaccessible and illiquid as firms generally require a large minimum investment, high transaction costs, and a long lock-up period.

RWA tokenization, the process of representing an asset on the blockchain, provides a solution to solve these challenges. MOCHAX leverages Tokeny’s white-label T-REX Platform to tokenize its equity, offering an e-commerce-like digital experience for investors. This approach simplifies the investment process and replaces traditional manual methods. By enabling 24/7 peer-to-peer automated transactions among qualified investors, MOCHAX is reshaping the equity market, making previously impossible features accessible to equity investors and increasing liquidity.

An EY survey shows that high-net-worth investors and institutional investors rank tokenized equity as the top choice among tokenized alternative assets, due to its increased liquidity, lower transaction costs, improved performance, and enhanced transparency. Since 2021, the team at MOCHAX has been at the forefront of blockchain and digital asset investments, achieving a 44X return on invested capital. To meet this demand, MOCHAX is positioning itself as the go-to platform for tokenized equity with the launch of its security token offering.

“Teaming up with Tokeny unlocks a new era for us, harnessing their unmatched technical prowess to revolutionize our onchain equity capabilities. The integration of the ERC-3643 standard for RWA tokenization guarantees seamless interoperability across the entire ecosystem, eliminating the inefficiencies of isolated systems. This strategic move empowers us to stay agile, scalable, and primed for innovation, ready to seize future market opportunities with confidence.” – Gregory Griffiths, General Partner at MOCHAX

“Onchain finance is revolutionizing value exchange with real-time transactions, automated compliance, and seamless interoperability. Private equity will benefit immensely, transforming slow, cumbersome processes into efficient onchain operations. Early adopters like MOCHAX will gain a competitive edge. We’re proud to partner with them to deliver unparalleled user experiences and drive industry change.” – Luc Falempin, CEO, Tokeny

About MOCHAX

MOCHAX represents a pioneering endeavor in the realm of venture capital investment, introducing a novel approach through tokenization of real-world assets like equities. As a security token offering (STO), MOCHAX aims to democratize access to venture capital opportunities by leveraging blockchain technology to tokenize traditional venture capital assets. By transforming startup equity and tokens into tradable digital assets, MOCHAX enables investors to participate in venture capital investments with increased liquidity, transparency, and accessibility.

www.mochax.xyz | info@mochax.xyz

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.



Elliptic

$235 million lost by WazirX in North Korea-linked breach

Earlier today, Indian exchange WazirX suffered a major loss of funds due to a suspected hack:

Wednesday, 17. July 2024

HYPR

Identity Evolved: The Rise of Multi-Factor Verification

Identity verification has traditionally played an important but limited role in the world of identity and access management (IAM). To establish someone’s identity, you need to prove that they are who they say they are, linking  their digital identity to their real-world identity. For employees, this verification typically occurs during onboarding; for customers, it happens when they open a new account. Once validated, they receive credentials, are granted appropriate authorizations, and enter the vast identity access flow universe — with identity verification rarely called upon again.

This system is fundamentally flawed.

Help desk social engineering, synthetic identities and AI-powered attacks are exploiting inadequate identity verification systems to completely circumvent IAM security. The $100 million attack on MGM Resorts occurred when attackers impersonated an employee, convinced the IT help desk to reset credentials, and then escalated privileges until gaining control of the entire system. Just a few months later, a finance worker at a multinational firm was tricked into wiring out $25 million when cybercriminals posed as senior executives using video and audio deepfakes. In fact, 78% of organizations were targeted by identity-related attacks last year.

Unmasking Social Deception

The industry urgently needs to evolve its approach to combat these modern threats. Multi-factor verification (MFV) offers the answer. A recent article by Susan Morrow makes the case eloquently — I highly recommend the read. Multi-factor verification moves beyond relying on authentication as the primary gatekeeper, making identity verification, based on multiple verification factors and risk assessment, an intrinsic part of daily access flows.

This transformation is the next step in identity security maturation, similar to authentication’s progression from passwords, to multi-factor authentication, to phishing-resistant MFA and passkeys. Authentication had to adapt to combat escalating phishing and password-related attacks. Multi-factor verification is essential to stem the onslaught of sophisticated social engineering threats.

Fake Passport Used to Bypass Crypto Exchange IDV System. Image Source: 404 Media

The Current State of Authentication vs. Verification

To understand what makes multi-factor verification such a powerful tool, it’s helpful to go back to IAM basics.

What Is Authentication?

In the digital world, authentication is the process of confirming the identity of a user before allowing them to access a device or account. Note that I say user, not person, because that’s what they are in this process — a user in the system. Common authentication factors are something the user knows (like a password), something the user owns (like a mobile phone or hardware security key) or something the user is (biometric data like a fingerprint). Multi-factor authentication (MFA) requires two or more factors from different categories to confirm identity.
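The requirement that factors come from different categories is the crux. A minimal sketch in Python (the factor names and category map here are illustrative, not drawn from any specific product):

```python
# Map each factor to its category: knowledge, possession, or inherence.
# Factor names are illustrative only.
FACTOR_CATEGORIES = {
    "password": "knowledge",
    "security_key": "possession",
    "mobile_push": "possession",
    "fingerprint": "inherence",
}

def satisfies_mfa(presented_factors):
    """True only if the presented factors span at least two distinct categories."""
    categories = {FACTOR_CATEGORIES[f] for f in presented_factors}
    return len(categories) >= 2

print(satisfies_mfa(["password", "fingerprint"]))     # True: knowledge + inherence
print(satisfies_mfa(["security_key", "mobile_push"]))  # False: both possession
```

Note that two factors from the same category, such as a hardware key plus a push notification to a phone, still count as single-factor under this rule.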

What Is Identity Verification?

Also referred to as identity proofing, identity verification makes sure a person is who they claim to be, and that the identity is genuine. Verification can be done in person or digitally, using various methods depending on the level of identity assurance required. Methods include location checks, comparing user-supplied personal information against official databases, examining government-issued documents, matching a selfie against an official ID, and personal interactions, among others.

Authentication vs. Verification

In a nutshell, verification involves establishing a legitimate, proven user identity in a system. Authentication is about keeping unauthorized users out of the system.

What Is Multi-Factor Verification (MFV)?

Today, access to an organization’s systems and resources is primarily controlled by the authentication process. Yes, there are variations and layers — adaptive authentication, risk-based authentication, access controls like RBAC and PAM — but essentially the act of providing the right combination of credentials gets you through the door. Multi-factor verification (MFV) brings deeper identity verification checks and risk assessment into this daily access process.

Multi-factor verification integrates multiple verification factors dynamically and contextually throughout the user session. This approach combines continuous verification with authentication mechanisms so that you are not just validating the user, you are validating the human.

Multi-factor authentication vs. multi-factor verification

How MFV Works

Today, comprehensive identity verification checks are generally performed only at specific points in time, such as when opening a new account or beginning a job. At other critical moments, such as resetting a credential or registering a new phone, most organizations rely on knowledge-based answers or calling the helpdesk, which are notoriously vulnerable to social engineering.

Anatomy of the Help Desk Social Engineering Attack on MGM Resorts

By contrast, MFV continuously verifies the person's identity based on a combination of factors such as behavior, context, and biometrics. This dynamic verification adapts in response to behavior anomalies, device telemetry, environment and other risk signals, making it more difficult for attackers to exploit. By integrating these factors in real-time, MFV offers a secure, fast and less intrusive verification process.
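As a rough sketch of how such adaptive, risk-based step-up can work (the signals, weights, and thresholds below are hypothetical, not HYPR's actual model):

```python
# Hypothetical risk signals and weights -- illustrative only.
SIGNAL_WEIGHTS = {
    "new_device": 0.4,       # device telemetry anomaly
    "geo_anomaly": 0.3,      # unusual location or impossible travel
    "behavior_drift": 0.3,   # deviation from the user's normal patterns
}

def risk_score(signals):
    """Sum the weights of all signals currently firing."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def required_verification(signals):
    """Step up the verification requirement as risk grows."""
    score = risk_score(signals)
    if score >= 0.6:
        return "live video verification"
    if score >= 0.3:
        return "document and selfie match"
    return "silent background checks"

print(required_verification({}))  # silent background checks
print(required_verification({"new_device": True, "geo_anomaly": True}))  # live video verification
```

The point of the design is that low-risk sessions see only the silent checks, while anomalies trigger progressively stronger proofs of identity.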

Benefits of Multi-Factor Verification

Nearly 4 in 10 organizations name identity verification as a top identity security challenge. MFV addresses their pain points on multiple fronts.

Stop Social Engineering and other Identity Threats:  Last year saw a 71% increase in attacks abusing valid accounts. MFV's continuous verification significantly reduces the risk of ATO, session hijacking and other attacks. By continuously adapting to the user's behavior and context, multi-factor verification provides greater resistance to sophisticated threats, ensuring that only legitimate users can maintain access.

Improved User Experience: Most organizations struggle with real-time verification, spending more than two hours verifying identity during employee onboarding, when replacing a device, recovering an account or during other high-risk scenarios. MFV provides a seamless and less intrusive verification process, with basic checks conducted behind the scenes. Additional forms of proof are only required at times of increased risk, creating a smoother and more personalized experience.

Scalability and Flexibility: MFV is easily adaptable to different industries and use cases. Its flexibility allows integration with existing identity stacks, making it a scalable solution for organizations of all sizes.

How HYPR Uses Multi-Factor Verification

Multi-factor verification is core to HYPR’s Identity Assurance Platform. The HYPR Platform unifies  phishing-resistant passwordless authentication, adaptive risk mitigation and automated identity verification into a seamless, user-centric access flow. Organizations can easily choose and configure the identity verification processes that suit their environment and use cases. For example, secure self-service options at times of low risk, with additional steps such as live video chat in higher risk scenarios or when security anomalies are detected. They can also enforce a range of phishing-resistant authenticators including device-bound Enterprise Passkeys, hardware keys and smart cards.

Example Multi-Factor Verification Flow With HYPR

Toward Identity-Centric Security With MFV

Organizations worldwide have an identity problem. The vast majority of breaches today are related to identity issues. As Gartner’s Cybersecurity Chief of Research, Mary Ruddy, pointed out, “Digital security is reliant on identity whether we want it to be or not. In a world where users can be anywhere and applications are increasingly distributed across datacenters in the multi-cloud, identity IS the control plane.”

Current access processes are no match against attackers’ nimble and incessant tactics. Initiatives like FIDO’s recently announced identity verification certification program bring critical advancement, but are just part of the answer. Multi-Factor Verification (MFV) marks a major leap forward in identity security, offering stronger protection and a better user experience. As organizations plan to build a more identity-centric security approach, it’s imperative they include MFV in their identity security protocols. Emerging technologies like decentralized identity systems hold promise for even more secure and efficient verification methods. Continuous innovation will drive MFV’s evolution, ensuring it remains a strong defense against emerging threats.


1Kosmos BlockID

Blockchain Identity Management: A Complete Guide

Introduction

Traditional identity verification methods show their age, often proving susceptible to data breaches and inefficiencies. Blockchain emerges as a beacon of hope in this scenario, heralding a new era of enhanced data security, transparency, and user-centric control to manage digital identities. This article delves deep into blockchain’s transformative potential in identity verification, highlighting its advantages and the challenges it adeptly addresses.

What is Blockchain?

Blockchain technology is a digital ledger of transactions stored in a decentralized way. Distributed across a network of computers, the ledger ensures that every transaction is recorded in multiple places. This decentralized design means that no single entity controls the entire blockchain, and all transactions are transparent to every user.


Types of Blockchains: Public vs. Private

Blockchain technology can be categorized into two primary types: public and private. Public blockchains are open networks where anyone can participate and view transactions. This transparency ensures security and trust but can raise privacy concerns. In contrast, private blockchains are controlled by specific organizations or consortia and restrict access to approved members only. This restricted access offers enhanced privacy and control, making private blockchains suitable for businesses that require confidentiality and secure data management.

Brief history and definition

The concept of blockchain, a distributed ledger technology, was first introduced in 2008 by an anonymous entity known as Satoshi Nakamoto. Initially, it was the underlying technology for the cryptocurrency Bitcoin. The primary goal was to create a decentralized currency, independent of any central authority, that could be transferred electronically in a secure, verifiable, and immutable way. Over time, the potential applications of blockchain have expanded far beyond cryptocurrency. Today, it is the backbone for various applications, from supply chain management and blockchain identity management solutions to voting systems.

Core principles

Blockchain operates on a few core principles. Firstly, it’s decentralized, meaning no single entity or organization controls the entire chain. Instead, multiple participants (nodes) hold copies of the whole blockchain. Secondly, transactions are transparent. Every transaction is visible to anyone who has access to the system. Lastly, once data is recorded on a blockchain, it becomes immutable. This means that it cannot be altered without altering all subsequent blocks, which requires the consensus of most of the blockchain network.

The Need for Improved Identity Verification

Identity verification is a cornerstone for many online processes, from banking to online shopping. However, traditional methods of identity verification leave much to be desired. They often rely on centralized databases of sensitive information, making them vulnerable to data breaches. Moreover, these methods often require users to share personal details repeatedly to prove their identity, increasing the risk of data theft or misuse.

Current challenges in digital identity

Digital identity and credential systems today face multiple challenges. Centralized systems are prime targets for hackers: a single breach can expose the personal data of millions of users. Additionally, users often need to manage multiple usernames and passwords across various platforms, leading to password fatigue and increased vulnerability. There’s also the issue of privacy. Centralized digital identity systems often share user data with third parties, sometimes without the user’s explicit consent.


Cost of identity theft and fraud

The implications of identity theft and fraud are vast. For individuals, it can lead to financial loss, credit damage, and a long recovery process. For businesses, a breach of sensitive information can result in significant financial losses, reputational damage, and loss of customer trust. According to reports, the annual cost of identity theft and fraud runs into billions of dollars globally, affecting individuals and corporations alike.

How Blockchain Addresses Identity Verification

Blockchain offers a fresh approach to identity verification. By using digital signatures and leveraging its decentralized, transparent, and immutable nature, blockchain technology can provide a more secure and efficient way to verify identity without traditional methods’ pitfalls.

Decentralized Identity

Decentralized identity systems on the blockchain give users complete control over their identity data. Users can provide proof of their identity directly from a blockchain instead of relying on a central authority to store and verify identity records. This reduces the risk of a centralized data breach and gives users autonomy over their identities and personal data.

Transparency and Trust

Blockchain technology fosters trust through transparency, but the scope of this transparency varies significantly between public and private blockchains. Public blockchains allow an unparalleled level of openness, where every transaction is visible to all, promoting trust through verifiable openness. On the other hand, private blockchains offer a selective transparency that is accessible only to its participants. This feature maintains trust among authorized users and ensures that sensitive information remains protected from the public eye, aligning with privacy and corporate security requirements.

Immutability

Once identity data is recorded on a blockchain, it cannot be altered without consensus. This immutability of sensitive, personally identifiable information ensures that identity data remains consistent and trustworthy. It also prevents malicious actors from changing identity data for fraudulent purposes.

Benefits of Blockchain Identity Verification

Blockchain’s unique attributes offer a transformative approach to identity verification, addressing many of the challenges faced by traditional identity verification systems.

Enhanced Security

Traditional identity verification systems, being centralized, are vulnerable to single points of failure. If a hacker gains access, the entire system can be compromised. Blockchain, with its decentralized nature, eliminates this single point of failure. Each transaction is encrypted and linked to the previous one. This cryptographic linkage ensures that even if one block is tampered with, it would be immediately evident, making unauthorized alterations nearly impossible.

User Control

Centralized identity systems often store user data in silos, giving organizations control over individual data. Blockchain shifts this control back to users. With decentralized identity solutions, individuals can choose when, how, and with whom they share their personal information. This not only enhances data security and privacy but also reduces the risk of data being mishandled or misused by third parties.

Reduced Costs

Identity verification, especially in sectors like finance, can be costly. Manual verification processes, paperwork, and the infrastructure needed to support centralized databases contribute to these costs. Blockchain can automate many of these processes using smart contracts, reducing the need for intermediaries and manual interventions and leading to significant cost savings.

Interoperability

In today’s digital landscape, individuals often have their digital identities and personal data scattered across various platforms, each with its verification process. Blockchain can create a unified, interoperable system where one’s digital identity documents can be used across multiple platforms once verified on one platform. This not only enhances user convenience but also streamlines processes for businesses.

The Mechanics Behind Blockchain Identity Verification

Understanding the underlying mechanics is crucial to appreciating how a blockchain network can verify identity.

How cryptographic hashing works

Cryptographic hashing is at the heart of the blockchain network’s security. When a transaction occurs, it’s converted into a fixed-size string of numbers and letters using a hash function. This unique hash is practically impossible to reverse-engineer. When a new block is created, it contains the previous block’s hash, chaining the blocks together. Any alteration in a block changes its hash, breaking the chain and alerting the system to potential tampering.
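A minimal Python sketch of this hash chaining (simplified: real blocks also carry timestamps, nonces, and Merkle roots):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Build a small chain: each block embeds the hash of its predecessor
genesis = make_block("identity record A", prev_hash="0" * 64)
block2 = make_block("identity record B", prev_hash=block_hash(genesis))
block3 = make_block("identity record C", prev_hash=block_hash(block2))
chain = [genesis, block2, block3]

def is_valid(chain):
    # Altering any block changes its hash and breaks the link to its successor
    return all(
        chain[i + 1]["prev_hash"] == block_hash(chain[i])
        for i in range(len(chain) - 1)
    )

print(is_valid(chain))        # True
genesis["data"] = "tampered"  # alter an early block in place
print(is_valid(chain))        # False: the break is detected immediately
```

Changing one byte in the genesis block invalidates every link after it, which is exactly why rewriting history requires rewriting all subsequent blocks.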

Public and private keys in identity verification

Blockchain uses a combination of public and private keys to secure transactions. A public key serves as a user’s address on the blockchain, while a private key is secret information that allows its holder to initiate transactions. Only individuals with the correct private key can access and share their data for identity verification, ensuring data integrity and security.
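To illustrate the asymmetry, here is a textbook-RSA toy in Python: the private exponent signs, and the public exponent verifies. The tiny primes make it trivially breakable, so this is for intuition only, not real cryptography.

```python
import hashlib

# Toy RSA keypair with tiny primes: for illustration only, never for real use.
p, q = 61, 53
n = p * q                  # modulus, shared as part of the public key
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def sign(message: bytes) -> int:
    # Only the private-key holder knows d, so only they can produce this value
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (e, n) can check the signature
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"share my identity attributes")
print(verify(b"share my identity attributes", sig))            # True
print(verify(b"share my identity attributes", (sig + 1) % n))  # False: forged signature
```

The same sign-with-private, verify-with-public pattern underlies blockchain transactions, only with much larger keys and hardened schemes such as ECDSA.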

The role of consensus algorithms

Consensus algorithms are protocols that consider a transaction valid based on the agreement of the majority of participants in the network. They play a crucial role in maintaining the trustworthiness of the blockchain. In identity verification, consensus algorithms ensure that once a user’s identity data is added to the blockchain, it’s accepted and recognized by the majority, ensuring data accuracy and trustworthiness.
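In its simplest form, consensus is a majority vote. The sketch below is far simpler than real protocols such as proof-of-work or proof-of-stake, but it captures the "agreement of the majority" idea:

```python
def reach_consensus(votes, threshold=0.5):
    """Accept a proposed record only if more than `threshold` of nodes approve it."""
    approvals = sum(1 for v in votes.values() if v)
    return approvals / len(votes) > threshold

# Four of five nodes recognize the new identity record -> accepted
votes = {"node-a": True, "node-b": True, "node-c": False, "node-d": True, "node-e": True}
print(reach_consensus(votes))  # True

# Only two of five approve -> rejected, the record is not added
votes = {"node-a": True, "node-b": True, "node-c": False, "node-d": False, "node-e": False}
print(reach_consensus(votes))  # False
```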

Conclusion

Through its unique attributes, blockchain presents a compelling and transformative alternative to the pitfalls of conventional identity management and verification systems. By championing security, decentralization, and user empowerment, it sets a new standard for the future of blockchain-based identity and access management solutions. To understand how this can redefine your identity management and verification processes, book a call with us today and embark on a journey toward a stronger security posture.

The post Blockchain Identity Management: A Complete Guide appeared first on 1Kosmos.


auth0

Use Private Key JWTs to Authenticate Your .NET Application

Add Private Key JWT authentication to your .NET application to empower security in sensitive contexts.

liminal (was OWI)

Navigating the Account Takeover Threat Landscape: Prevention Strategies for Phishing and Social Engineering

The post Navigating the Account Takeover Threat Landscape: Prevention Strategies for Phishing and Social Engineering appeared first on Liminal.co.

UNISOT

Protecting Olive Oil Authenticity with UNISOT

In light of the recent significant seizure of counterfeit olive oil by Italian authorities, the need for robust traceability and authenticity in the olive oil industry has never been more critical. The post Protecting Olive Oil Authenticity with UNISOT appeared first on UNISOT.

In light of the recent significant seizure of counterfeit olive oil by Italian authorities, the need for robust traceability and authenticity in the olive oil industry has never been more critical. Italian authorities confiscated nearly €900,000 worth of fake extra virgin olive oil – around 42 tons – highlighting the pervasive issue of olive oil fraud.

“Some of the 42 tons of oil was already packaged ready for sale. Authorities confiscated 71 tons of what was referred to as an ‘oily substance’ in plastic tanks, as well as 623 liters of chlorophyll, a component of extra virgin olive oil that was being added to oil of a lesser value.

They found packaging equipment, labels purporting that the oil was ‘extra virgin’ when it was clearly not, and commercial documentation including 1,145 customs excise duty stamps that are being studied for forgery, the statement said.”
– Barbie Nadeau, CNN

This fraudulent activity not only undermines consumer trust but also poses serious health risks. UNISOT’s Asset Traceability Platform can play a crucial role in preventing such fraudulent activities.

One of our AgriOnChain customers is Az. Agricola Francesco Pepe. Francesco has successfully implemented UNISOT’s Digital Product Passports to secure his supply chain. This implementation has been instrumental in proving and maintaining his reputation for producing exceptional olive oil, winning numerous awards, and achieving recognition in the Olive Oil Bible FlosOlei. By using UNISOT, he has ensured that his products remain authentic and traceable, reinforcing consumer confidence and loyalty.

Feel free to scan the QR code on the image, which will take you directly to the Digital Product Passport for this Erede Extra Virgin Olive Oil.

“AgriOnChain’s innovative traceability solutions have enabled me to share the story of my premium Italian olive oil with the world. With Digital Product Passports and Smart QR-codes, I can highlight the authenticity, quality, and sustainability of my products to customers everywhere. Through this partnership, I have not only expanded my market reach but also deepened the connection with my customers, creating a community that values the traditions and principles I hold dear. UNISOT’s AgriOnChain technology empowers me to focus on what I do best – producing exceptional olive oil – while they handle the technology seamlessly behind the scenes.” – Francesco Pepe, Az. Agricola Francesco Pepe

How UNISOT’s Asset Traceability Platform can help

COMPREHENSIVE TRACEABILITY

UNISOT’s Platform ensures complete traceability of olive oil from the grove to the consumer. Every step of the production process, including harvesting, pressing and bottling, is securely recorded, digitally signed and timestamped on the blockchain. This transparency ensures that every bottle of olive oil is traceable back to its origin, making it extremely difficult for counterfeit products to infiltrate the supply chain.

PRODUCT AUTHENTICATION

Utilizing unique digital identities and QR codes, UNISOT allows consumers to verify the authenticity of Agricola Francesco Pepe’s olive oil instantly. By scanning the QR code on the bottle, consumers can access detailed information about the product’s origin, production process and quality certifications.
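Conceptually, the consumer-side check boils down to comparing a recomputed hash of the passport record against the one anchored on-chain. The sketch below is hypothetical: the field names, URL, and flow are illustrative, not UNISOT's actual API.

```python
import hashlib
import json

# Hypothetical product passport record -- field names and values are illustrative.
passport = {
    "product": "Erede Extra Virgin Olive Oil",
    "producer": "Az. Agricola Francesco Pepe",
    "harvest": "2023-10",
    "batch": "EV-042",
}

# The producer hashes the canonical record; conceptually, this hash is what
# would be anchored on the blockchain at bottling time.
record = json.dumps(passport, sort_keys=True).encode()
anchored_hash = hashlib.sha256(record).hexdigest()

# The QR code on the bottle resolves to the passport, keyed by that hash
# (example.com stands in for the real passport URL).
qr_payload = f"https://example.com/passport/{anchored_hash}"

def verify_passport(presented_record: bytes, onchain_hash: str) -> bool:
    """A consumer app recomputes the hash and compares it to the anchored one."""
    return hashlib.sha256(presented_record).hexdigest() == onchain_hash

print(verify_passport(record, anchored_hash))  # True: genuine passport
tampered = record.replace(b"EV-042", b"EV-999")
print(verify_passport(tampered, anchored_hash))  # False: record was altered
```

Because any edit to the record changes its hash, a counterfeit bottle cannot reuse a genuine passport without the mismatch being detected.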

SUPPLY CHAIN INTEGRITY

Our platform enhances the integrity of the supply chain by enabling real-time monitoring and alerts for any anomalies. This capability helps in early detection of potential fraud and ensures that only high-quality olive oil reaches the market.

CONSUMER CONFIDENCE

With increasing incidents of food fraud, consumer trust is paramount. AgriOnChain not only protects the reputation of premium producers like Agricola Francesco Pepe but also assures consumers of the product’s authenticity and superior quality. This trust is vital for sustaining and growing market share in a highly competitive industry.

COMBATING DOCUMENTATION FRAUD

In addition to ensuring product authenticity, AgriOnChain can also help prevent fraud related to commercial documentation. The recent case involving 1,145 customs excise duty stamps suspected of forgery underscores the need for secure and verifiable documentation. By leveraging UNISOT’s Secure Document Collaboration solution, every document related to the production, distribution and sale of olive oil can be securely recorded and verified. This reduces the risk of forgery and ensures that all commercial documents, including excise duty stamps, are legitimate and traceable.

The recent olive oil fraud cases underscore the urgent need for effective measures to safeguard the authenticity of olive oil. UNISOT’s AgriOnChain provides a comprehensive solution to combat fraud, enhance traceability and build consumer trust. As we continue to support high-quality producers like Francesco Pepe, we are committed to ensuring that consumers receive genuine, high-quality olive oil.

We can turn the tide against olive oil fraud and ensure a future where consumers can enjoy genuine, high-quality olive oil with confidence. For more information on how UNISOT can help protect your olive oil brand, visit our website or contact our sales team.

Sources:

The Grocer: https://www.thegrocer.co.uk/commodities/dozens-of-tonnes-of-fake-olive-oil-confiscated-by-italian-authorities/693330.article
Food Safety News: https://www.foodsafetynews.com/?s=olive+oil

The post Protecting Olive Oil Authenticity with UNISOT appeared first on UNISOT.


Dock

Community AMA: Binance Delisting and Future Plans for DOCK


Hey everyone,

For those who missed the AMA on July 15th with Nick Lambert (CEO) and Elina Cadouri (COO), here are the key highlights:

1. Nick and Elina expressed their heartfelt gratitude to the community for their support, especially during the recent challenges with the Binance delisting. They emphasized how much the community's loyalty means to them.

2. Binance Delisting: What happened with the Binance delisting?

DOCK was delisted from Binance without any prior warning, only 7 days after being added to the Monitoring List. Although Binance did not provide one specific reason for adding DOCK to the Monitoring List, the team believes it could have been due to trading volume or liquidity, given that none of the other reasons apply to the project. Dock had already been in the process of hiring a Market Maker to improve liquidity, who began working on this immediately. But Binance, without warning, delisted DOCK after only a week, whereas projects are typically given months or even a year to work on removing the monitoring tag. Despite this setback, the fundamentals of DOCK remain strong, and the team is committed to moving forward.

3. How is the team ensuring the stability and growth of the project?

We increased validator rewards to support and encourage network stability, engaged a regulated Market Maker to ensure liquidity for the DOCK token, and are exploring additional exchange listings to broaden DOCK’s availability and reach. Dock’s platform currently serves enterprise clients, and client acquisition remains a top priority.

4. Strategy and Goals: What are the main goals and strategies for the next year?

The team is continuing to execute the published roadmap with exciting new developments. It has filed a patent for "Verifier Pay Issuer", a feature that enables issuers to charge for the verification of a credential, showcasing DOCK’s innovative approach to decentralized identity. Several partnership announcements are anticipated, which will drive adoption and demand for the DOCK token.

5. Community: How can the community help support DOCK?

Community members can share and amplify DOCK’s social media content to create awareness. Keeping engagement positive and feedback constructive is important, and Dock remains transparent and accessible for any concerns or questions from the community.

6. Roadmap for 2024: What can we look forward to in the second half of 2024?

Dock will develop the ability to verify eIDAS 2.0 and mDL credentials, launch a Cloud Wallet Beta, integrate Biometric-bound credentials, launch an Embeddable Wallet SDK, and roll out support for the OpenID4VC standard.

7. Adoption and Token Demand: Why is client acquisition important for DOCK?

Every transaction on the DOCK network uses DOCK tokens, driving demand as more companies adopt the technology. DOCK is working to ensure that even companies without technical expertise can easily integrate and benefit from their innovative solutions.

8. Audience questions: Are team members personally invested in DOCK?

Yes, several team members hold DOCK in their personal portfolios and have received tokens as part of their compensation.

9. When will DOCK Wallet staking be available?

DOCK Wallet staking will be launched soon, with ongoing efforts to integrate with Nova Wallet.

10. Is there a chance of being relisted on Binance?

The likelihood of being relisted on Binance is low. The team is focusing on other growth opportunities and new exchange listings.

Thank you all for your continued support. We are excited about the future and look forward to sharing more updates soon!

You can watch the entire AMA here: https://youtu.be/mfn6jVKaN60


Shyft Network

The Rising Focus on L2 Solutions in the Crypto Ecosystem

Layer 2 solutions enhance scalability by resolving the high fees and slow processing times associated with Layer 1 blockchains like Bitcoin and Ethereum. By bundling transactions, L2s significantly boost transactions per second, improving speed and reducing costs. Increasing cryptocurrency adoption is driving L2 innovation and investment, leading to diverse technologies and substantial capital inflow.

Time and again, we have seen popular L1 networks — from Bitcoin and Ethereum to Solana — clogged with pending transactions, especially during periods of high activity. This not only leads to a significant increase in transaction fees but also prevents users from capitalizing on opportunities in time.

Such occurrences show us that speed and cost remain the biggest technical challenges in the crypto world. The root cause is the blockchain trilemma, which involves trade-offs among the technology’s three most critical aspects: decentralization, security, and scalability.

Layer 1 blockchains prioritize security and decentralization, achieved through a distributed, global network of participants. The downside is that Layer 1 blockchains often face scalability issues.

At the largest annual European Ethereum event, the Ethereum Community Conference (EthCC), Ethereum co-founder Vitalik Buterin highlighted Ethereum’s limitations, including its struggle to handle high volumes of transactions. This issue often results in increased fees and delays. He also pointed out the complexities newcomers face when interacting with decentralized applications (dApps) and the challenges associated with becoming a network validator.

To resolve these issues, scalability in particular, developers have taken to building layer 2 solutions.

Layer 2 blockchains are off-chain solutions built on top of L1s. Some of the current popular L2 solutions are Lightning Network, Stacks, Merlin Chain for Bitcoin and Optimism, Arbitrum, Base, and zkSync for Ethereum, to name a few.

Unlike L1, where every transaction has to go through the distributed network for processing and broadcasting, L2s take the load off by performing most of the functions off-chain. This is how popular payment platforms like Visa work. Instead of managing thousands of daily transactions separately, which ends up clogging the network, they batch the transactions for final settlement.

Similarly, L2 solutions offload the burden of managing thousands of transactions from the mainnet. To achieve this, L2s bundle a large number of transactions into a single transaction, which increases the throughput, i.e., transactions per second (TPS). For instance, Bitcoin has a TPS of 5 and Ethereum 7; in comparison, L2 solutions boast tens of thousands of TPS.

Higher throughput increases speed and lowers fees on these Layer 2 solutions; higher TPS and lower fees, in turn, improve the user experience and enhance utility. Meanwhile, by settling transactions on the mainnet, L2s retain security and decentralization.

L2s, however, aren’t of just one type; they utilize different technologies. Rollups are a popular one: transactions are executed off L1 and then rolled into a single piece of data before being posted back to the mainnet, where it is reviewed. There are variations of rollups, namely Optimistic and ZK rollups. Then there are sidechains, which work as independent blockchains running parallel to the main blockchain; to interact with L1, sidechains utilize bridges.
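The bundling step at the heart of rollups can be illustrated with a toy Merkle tree: many off-chain transactions are condensed into a single root, and that root is the only piece of data that needs to be posted to L1. This is a simplified sketch, not the data format of any production rollup.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 as the tree's hash function."""
    return hashlib.sha256(data).digest()

def merkle_root(txs):
    """Condense a batch of transactions into one 32-byte commitment."""
    level = [h(tx.encode()) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A batch of 1,000 off-chain transfers...
batch = [f"alice->bob:{i}" for i in range(1000)]
root = merkle_root(batch)

# ...settles on L1 as a single 32-byte root instead of 1,000 transactions.
print(len(root))  # 32
```

This is why throughput scales: L1 verifies one commitment (plus a validity or fraud proof, depending on the rollup type) regardless of how many transactions the batch contains.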

A Massive Wave of L2s

With crypto adoption rising significantly, the need for greater TPS is more important than ever. As of 2024, over half a billion users currently own crypto, and this number is projected to double by the end of this decade.

So, the greater the number of crypto users, the higher the number of transactions happening daily, and the greater the need for higher network capacity. Hence, there is an increasing need for and interest in L2s.

Today, going by Coingecko’s data alone, over a hundred projects are working on improved scalability. Top L1 coins are worth $1.8 trillion, with Bitcoin alone accounting for $1.16 trillion of that. L2 coins, meanwhile, have a combined market cap of almost $20 billion. Breaking that down, top sidechain coins are worth $1.43 billion, top Bitcoin sidechains have a collective market size of $2.6 billion, and top rollup coins have an $11.8 billion market cap.

According to L2Beat.com, more than $40 billion worth of capital is locked (TVL) across L2 projects.

The data clearly shows that a lot is happening in the L2 space, but this is just the beginning. As their usage and capital inflow continue to surge, many exciting things are coming up.

Among the exciting developments currently happening in the sector, L2 network Starknet plans to introduce staking on its ecosystem before the year is over. For scalability, it produces STARK proofs off-chain and then sends them on-chain. In the future, Starknet users will be able to lock their tokens for a 21-day period and earn rewards in proportion to the STRK tokens staked. Its CEO, Eli Ben-Sasson, called this “an important step in building the staking community and technology, offering new opportunities for users and developers.”

Popular Bitcoin L2 Stacks is currently preparing for a big upgrade called Nakamoto to honor the trillion-dollar crypto asset’s pseudonymous creator. With this upgrade, the L2 solution aims to decouple the Stacks block production schedule from that of Bitcoin to solve the congestion issues.

Hong Kong’s licensed crypto exchange operator, HashKey Group, is also planning to launch its Ethereum layer-2 solution, HashKey Chain, in Q4. Even meme coins like Shiba Inu have launched their very own L2 called Shibarium to handle a greater number of users and bring additional value to their ecosystem.

New waves of L2s are also entering the space. Blockchain platform Celo is launching its Dango Layer 2 testnet, for which it is utilizing Optimism’s OP Stack. BOB, a hybrid L2 project powered by Bitcoin and Ethereum, raised $1.6 million in a funding round led by Ledger Cathay Fund, with contributions from BlackRock, Rarible, Ordinals, Aave, Curve, Magic Eden, Mechanism, Injective, and Babylon.

While Solana boasts a high TPS, projects like Rome are raising funds from Polygon Ventures, HashKey, and angel investors, including Solana’s Anatoly Yakovenko and Austin Federa, to allow Ethereum-based rollups to use Solana as a shared sequencer.

Given L2s’ focus on allowing higher throughput and, as a result, greater transaction inclusion, it makes sense that everyone is boarding the L2 train. However, it’s important that equal effort goes into attracting users to engage on these platforms. For that, we need to focus on simplifying user onboarding and providing a more seamless user experience.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance into blockchain while preserving privacy and sovereignty. SHFT is its native token and fuel of the network.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (Formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.

The Rising Focus on L2 Solutions in the Crypto Ecosystem was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

How to harness the data-sharing capabilities of the private sector, with Lloyd Emmerson.

We sit down with the Director of Strategic Solutions at Cifas – the UK’s largest not-for-profit fraud prevention service – to discuss what changes the company would make to the UK’s fraud strategy, the importance of a whole-of-system solution to a whole-of-system problem and much more. Beginning on a personal note, you seemed to have spent most of your career in fraud prevention in some capacity. What was it that first attracted you to the industry? 

What attracted me to the fraud prevention community initially was the rewarding nature of keeping people safe and, more importantly, trying to stay one step ahead of an adversary that has no moral boundaries. Feeling like you have really made a difference to people’s lives at the end of each working day is what gets me out of bed in the mornings too. 

Perhaps a good place to get into this is with the UK’s fraud strategy, released in 2023. Do you think the earmarked £100 million and 400 new specialist fraud officers is enough to help reduce fraud by 10% by 2025? How likely do you think it will be to achieve? 

Fraud represents almost 40% of all crime in England and Wales and has more than doubled in Scotland over the past nine years. It’s the most prevalent crime in the UK that devastates individuals financially and emotionally, damages business reputations, and targets the public purse meaning wider communities miss out on critical resources and support. 

While the publication of the government’s Fraud Strategy in 2023 was a positive first step, we must do more to turn the tide on the fraud epidemic sweeping the UK. 

We want a future without fraud, so we welcome the additional investment and extra resources. However, we also recognize we can’t arrest our way out of the problem. The focus needs to be on prevention – that means a clear focus on intelligence-led responses that go beyond traditional policing and build on the premise of organizations working together. 

What would you say are the most common forms of fraud (e.g. money mules, social engineering fraud) in the UK? 

The latest data recorded by our 750-plus membership to the Cifas National Fraud Database (NFD) revealed that account takeover – where a criminal utilizes compromised personal data to hijack an existing account or product – rose by 13% in 2023 compared to 2022. Additionally, we saw a 5% increase in account abuse (often referred to as ‘misuse of facility’). Identity fraud also remained our most dominant case type, accounting for 64% of all 374,000-plus NFD cases. 

Filings to the Cifas Insider Threat Database continued to increase and were up 14% in 2023 compared to the year before. Just under half of these (49%) related to dishonest action by employees.

UK Fraud Awareness Report 2024: Learn more about the British public’s awareness of fraud and their attitudes toward fraud-prevention technology.

On May 15, Cifas delivered its ‘Fraud Pledges 2024’ to Number 10 Downing Street. What areas of the government’s approach to tackling fraud do you think need to be improved? What was the reaction and have any next steps been agreed? 

As our Cifas Fraud Pledges make clear, there is no silver bullet for tackling fraud. It is a whole-of-system problem requiring a whole-of-system solution. However, we think a good starting point for the next government would be to appoint a Minister for Economic Crime to drive proper cross-system leadership on this issue. 

Beyond this, it is essential we invest properly in fraud policing and the criminal justice response as well as harness the capabilities of the private sector, through enhanced data-sharing, to disrupt fraud and financial crime and act as the first line of defense. 

Cifas is advocating for social media companies to collaborate more to combat fraud. As social media fraud is such a huge arena, spanning almost every platform, from job search platforms to dating platforms, which parties do you think should be collaborating? And, as many platforms offer a very light-touch identity verification process, how important is it to ensure all users are required to undergo a thorough customer onboarding process? 

No one sector can single-handedly tackle fraud. It is an issue which cuts through industries and across the public and private sectors. The only way we can tackle the problem is by breaking down cross-sector barriers, finding meaningful ways to collaborate, and sharing data and intelligence to ensure there is no weak link in the chain. 

Additionally, it is essential that the biggest online platforms and services most abused by criminals join the counter-fraud community and multi-sector data-sharing initiatives and, where appropriate, ensure robust customer screening. 

With most fraud in the UK happening online, across every medium and industry, how can people and companies best protect themselves? 

At Cifas, our whole purpose is to eradicate fraud in the UK. To do so, we must build products and services that help organizations and their customers fight economic crime and protect themselves against fraud. 

We’re rolling out several preventative solutions to help businesses scale their counter-fraud efforts and keep people safe. One currently being developed is a consumer-based app that proactively protects consumers from identity fraud and puts them in complete control of their personal information, effectively making stolen data worthless. It’s in beta testing and on track to launch in 2025. 

More generally, people and companies must work collaboratively to gain maximum protection against the threats of fraud because criminals will always find new ways to exploit weaknesses, particularly when they’re able to hide in relative anonymity online. We would always urge individuals and companies to stay vigilant and think if something seems too good to be true, it probably is.

What dangers does AI pose to UK’s fraud landscape? What steps can business, government and the local public take to weather the storm? 

Technology and AI are going to be critical in how we deal with the threat posed by criminals’ use of AI and other technologies – it presents both opportunities and challenges to the UK’s fraud landscape. 

Typically, criminals exploit weaknesses and rely on panic and urgency to get personal information and/or financial details. Both businesses and consumers should take a moment and challenge where a piece of communication has come from. Where possible, getting a second opinion is incredibly important and if something doesn’t look right, report it at the earliest opportunity. 

Overall, eradicating fraud is not down to one individual, industry, the government or law enforcement – it’s a collective effort. Our mission at Cifas is to create the largest counter-fraud community that shares data, intelligence and knowledge – all of which can be used to create products and services that protect everyone. To do so, we want to be a central component of a new and more effective UK-wide, data-sharing architecture that unlocks cross-sector collaboration across the public, private and law enforcement domains. 

How have fraud attacks changed over the last 20 years and how do you envision it changing in the next 20? 

Today, you no longer need a complex piece of malware to trick a consumer into handing over key personal data. All that’s required is a mobile phone, a few vague details about an individual and some social engineering for an attack to be successful time and time again. 

I feel that fraud will evolve with AI over the next 20 years such that the AI component, whatever that may be, will automate many of the attack vectors we see today and make them even more scalable and lucrative for criminals to execute. 

Again, it comes back to cross-sector collaboration – if we create the right data flows, safeguards, and frameworks to share real-time risk data, we can defeat fraud.

If you’re interested in more insights from industry insiders and thought leaders from the world of fraud and fraud prevention, check out one of our interviews from our Spotlight Interview series below.

Jinisha Bhatt, financial crime investigator
Paul Stratton, ex-police officer and financial crime trainer
David Birch, global advisor and investor in digital financial services

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn

Tuesday, 16. July 2024

Civic

Civic Milestones & Updates: Q2 2024


A few important milestones marked the second quarter of 2024, implying new circumstances for the crypto sector. Most importantly, the SEC approved 8 Ethereum ETFs, including BlackRock and Fidelity, ushering in new growth. At the same time, Bitcoin ETFs grew rapidly after their January launch to about $50 billion. On the US regulatory front, the […]

The post Civic Milestones & Updates: Q2 2024 appeared first on Civic Technologies, Inc..


KuppingerCole

LoginRadius CIAM Platform


by John Tolbert

This KuppingerCole Executive View report looks at the issues and options available to IT managers and security strategists to manage consumer and customer identity access management. A technical review of the LoginRadius CIAM platform is included.